You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Morgan Rogers (he/him) said:
William Troiani and I set up a Twitch stream where we have maths research discussions live; here is our channel. We're keeping our discussion to "side projects" which enable us to learn stuff and for which we don't mind taking the "risks" of open discussion; our first project is about machine learning and cryptography.
We're planning another one this Saturday morning; I expect we'll start at around 10am (Paris time) although we're still ironing out the technical set-up so it might be later. Drop by if you like!
We're still setting up for this but will be live in a short while!
[technical issues ensue]
It was fun! We spent the stream understanding SHA-2 so that we can mess around with variations of it next time. Thanks to those who joined!
Looks like @William Troiani and I will be doing another stream tomorrow morning (again, from around 11am Paris time). Take a peek if you have a chance.
Starting soon (I would apologise for lateness, but our audience isn't quite big enough to hold us accountable yet).
Late notice that @William Troiani and I will be streaming from approximately 3:30pm Paris time today. We read a couple of papers last time and decided how we were going to implement our miniature version of SHA-2. In the meantime we have implemented that in Python. This week we will be discussing neural nets and potential pitfalls of our project overall. See you there!
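For anyone curious what a "miniature SHA-2" could look like, here is a purely illustrative Python sketch of a SHA-2-style compression round on 16-bit words. The names `rotr` and `toy_compress` and all the constants are invented for this example; it is not the implementation from the stream.

```python
# Toy sketch only: a SHA-2-style compression round on 16-bit words.
# Names and constants are illustrative, not the miniature hash from the stream.
def rotr(x, n, bits=16):
    """Rotate a `bits`-wide word right by n."""
    return ((x >> n) | (x << (bits - n))) & ((1 << bits) - 1)

def toy_compress(state, word, bits=16, rounds=8):
    """Mix one message word into a two-word state, SHA-2 style."""
    mask = (1 << bits) - 1
    a, b = state
    for i in range(rounds):
        t = (rotr(a, 3, bits) ^ rotr(b, 5, bits)) + ((word + i) & mask)
        a, b = b, (a + t) & mask
    # Davies-Meyer-style feed-forward of the original state.
    return ((state[0] + a) & mask, (state[1] + b) & mask)

print(toy_compress((0x6A09, 0xBB67), 0x1234))
```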
In spite of neither of us having had time to complete the promised programming from last time, @William Troiani and I will be doing another research stream from 1pm Paris time today (beware that clocks changed this morning so that might be an hour sooner than you expect). Please do come along even if you haven't been following, we can always use feedback ;)
To mix things up, Will and I are going to try out Monday morning (we'll start shortly after 9am Paris time I expect; I'll post here if it looks like it will be later). If that gives you an excuse to check it out, then see you there!
Hi everyone, I moved posts about the Binary Operation Twitch stream to #general: events from the topic on my work. We now have a Mastodon account and an Instagram account, although we're still deciding how to use both of these; please follow if you're on these platforms!
The next stream will be tomorrow (Sunday) from 9:30am Paris time. We'll be diving deeper into finding out what asymptotic invariance of the distribution of stuff in the universe would look like. If that's a strange enough sentence to make you curious, please do drop by... It will be a physics-inspired examination of how symmetry interacts with complexity classes/asymptotic behaviour of functions.
If you don't have a Twitch account (you don't need one to watch, but you would need one to leave a comment during the stream), feel free to send questions either here or through the platforms mentioned above.
Hi folks, the next stream will be tomorrow at 4pm Paris time, on the usual channel.
Last time we talked about asymptotic equations, but when I tried to look these up on stream, the kinds of equation that came up were not of the form that we were considering. I wonder if anyone here has thought about asymptotic analysis and could lend us some insight?
(starting soon)
Hi folks! Relatively short notice, but we will be streaming today starting at 4pm Paris time, on the usual channel. Last time we figured out what it means to solve an asymptotic equation in one dimension; this time we'll be jumping up to three! I expect this will be the last session on asymptotic analysis (unless we make an exciting breakthrough that we can't help following up on), so do send us some suggestions for what you'd like to see us think about next.
Also, I'd like to give a shout out to @Ralph Sarkis, who has managed to attend every stream up to now and is our all-round biggest supporter. Thanks Ralph!
Thanks for the shout-out! I'll repeat what I said in private: I have enjoyed your streams very much, and they inspire me to follow your lead in sharing the process of doing research (I have many ideas, but I'll let them mature a bit).
Unfortunately, I can't join today.
Hi everyone. We skipped a week, but we'll be back today from 3pm Paris time. See you there!
This week will probably be a recap of what we've been up to on the stream, plus some spitballing of ideas of what to do next, so come along if you haven't had a chance to yet!
Bonus! It looks like we will have Giti Omidvar, a PhD student in proof theory, as a guest this week. A ternary operation for a change! ( @William Troiani would say "trinary", but you must forgive him. )
I'm enjoying listening to the recording of this last one! It's quite a different experience from reading a book, in a refreshing way.
I especially appreciate how you are all taking time and effort to explain the notation being introduced. For example, that horizontal line syntax (natural deduction syntax?) has scared me away from several books before, so it's awesome to see it discussed and not just introduced without comment!
I also enjoyed learning a new word: "meta-notation"! Although I confess I don't understand precisely what it means, the general idea of notation that doesn't live in some specific formal context (but instead in some "meta" context?) is an interesting one.
I don't know what "meta-notation" is, but it reminds me of "meta-variables" in logic. In the traditional approach to first-order logic we often want to say stuff like "for any variable $x$ and any formula $P$, there is a formula $\forall x\, P$". But variables are specific symbols, so "for any variable $x$" doesn't make sense unless $x$ is a meta-variable - a symbol that can stand for any variable!
I remember being delighted by this when I was first learning logic.
John Baez said:
I don't know what "meta-notation" is, but it reminds me of "meta-variables" in logic. In the traditional approach to first-order logic we often want to say stuff like "for any variable $x$ and any formula $P$, there is a formula $\forall x\, P$". But variables are specific symbols, so "for any variable $x$" doesn't make sense unless $x$ is a meta-variable - a symbol that can stand for any variable!
Oh, that's interesting! Although I'm not sure I follow.
If "usual" variables are used to denote an arbitrary real number (for example), is the idea that "meta" variables are used to denote an arbitrary usual variable? For example, let $x$ be some real number and $y$ be some real number, and then let $z$ be either $x$ or $y$. Would $z$ be a meta variable?
Presumably you could have multiple meta variables used to denote arbitrary elements of different collections of usual variables. Then I suppose we could have a meta-meta variable, which we could use to denote an arbitrary element of a collection of meta variables!
I'm not sure this is the right idea, though.
In traditional logic variables are specific symbols $x, y, z, \ldots$, which you use in "formulas" like
$$\forall x\, P(x)$$
(here $P$ is a "predicate") or
$$\exists y\, P(y).$$
We need to know exactly what these symbols are, or we don't know what counts as a formula. For example
$P$(:car: )
is not a formula unless you decreed that :car: is a variable. Get it? Once you've set up your system of logic, you don't get to scrawl arbitrary junk and call that a formula.
But now say I want to state a rule of logic, like
$$\cfrac{\forall x\, P(x)}{\forall y\, P(y)}$$
This is my feeble attempt to write a "deduction rule" which means "from $\forall x\, P(x)$ you can deduce $\forall y\, P(y)$". That horizontal line thing is the thing that was scaring you earlier.
The problem though is that I just wrote this rule with specific variable symbols $x$ and $y$! So this rule doesn't help me at all with $\forall z\, P(z)$!
What I want to say is that this rule works for any variable symbols - and any predicate symbol $P$.
So I should write something like
$$\cfrac{\forall \mathbf{x}\, \mathbf{P}(\mathbf{x})}{\forall \mathbf{y}\, \mathbf{P}(\mathbf{y})}$$
Here the guys in boldface are "meta-variables": $\mathbf{x}$ and $\mathbf{y}$ stand for an arbitrary variable symbol, and $\mathbf{P}$ stands for an arbitrary predicate symbol.
From this rule I can get zillions of specific rules like
$$\cfrac{\forall x\, P(x)}{\forall y\, P(y)} \qquad \cfrac{\forall z\, Q(z)}{\forall w\, Q(w)}$$
etc.
This "meta" stuff is showing up because we are trying not just to use logic, but to talk about logic in a logical way.
If I understand correctly, $\forall x\, P(x)$ and $\forall y\, P(y)$ are different formulas because they use different symbols. However, intuitively it seems like they should mean the same thing: for any input provided to $P$, $P$ evaluates to true when fed that input.
The meta stuff then comes in because we want to notice a pattern for rules that is valid across the change of symbols, I think (?). So, for a rule in terms of a meta-variable, any specific choice of symbol substituted in for our meta-variable yields a rule in terms of that symbol.
(And you've gone the extra mile and also used a meta-variable for the predicate, so that we need to substitute in two specific things (a choice of symbol and a choice of predicate) to get a rule in terms of specific symbols and predicates.)
I think it's making more sense now. Thanks for explaining!
David Egolf said:
If I understand correctly, $\forall x\, P(x)$ and $\forall y\, P(y)$ are different formulas because they use different symbols. However, intuitively it seems like they should mean the same thing: for any input provided to $P$, $P$ evaluates to true when fed that input.
Yes, they mean the same thing. Any respectable system of logic would let you show
$$\forall x\, P(x) \iff \forall y\, P(y)$$
and have deduction rules that imply the deduction rule
$$\cfrac{\forall x\, P(x)}{\forall y\, P(y)}$$
and
$$\cfrac{\forall y\, P(y)}{\forall x\, P(x)}$$
But they are different strings of symbols written on the page. Traditional logic focuses a lot on 'syntax': that is, roughly, expressions you write on the page, and rules for manipulating them.
The goal was to figure out whether, or how, we can capture everything we know about mathematics in some system for manipulating strings of symbols written on the page.
Nowadays some people focus more on computer code rather than symbols on the page, but the ideas are similar: we're trying to reduce mathematics to completely formal rules for manipulating data.
David Egolf said:
The meta stuff then comes in because we want to notice a pattern for rules that is valid across the change of symbols, I think (?). So, for a rule in terms of a meta-variable, any specific choice of symbol substituted in for our meta-variable yields a rule in terms of that symbol.
Right! Or in other words, to state general rules for manipulating symbols, we need symbols that stand for these symbols!
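As a toy way of seeing the "symbols that stand for symbols" point, here is a small illustrative Python sketch (the schema format and the names `SCHEMA` and `instantiate` are invented for this example): a rule written with meta-variables is a template, and each choice of concrete symbols gives a specific rule.

```python
# Hypothetical illustration: a rule *schema* with meta-variables X, Y, P.
# Substituting concrete symbols for the meta-variables yields specific rules.
SCHEMA = ("forall {X}. {P}({X})", "forall {Y}. {P}({Y})")  # (premise, conclusion)

def instantiate(schema, **meta):
    premise, conclusion = schema
    return premise.format(**meta), conclusion.format(**meta)

print(instantiate(SCHEMA, X="x", Y="y", P="P"))  # ('forall x. P(x)', 'forall y. P(y)')
print(instantiate(SCHEMA, X="z", Y="w", P="Q"))  # ('forall z. Q(z)', 'forall w. Q(w)')
```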
Back to the twitch stream, I would also like to say that the discussed mapping from formulas to graphs is a lot of fun to hear about! Although it's surprising to me that a formula and its negation are sent to isomorphic graphs.
Really fun and interesting to watch, thanks for doing this!
Connecting the stream with this discussion, we were looking at Natural Deduction (one of the formal systems for expressing proofs in classical logic). The definition of the system introduces proofs and "discharging" formulas, expressed in the definitions using "meta-notation" to stand for hypothetical proofs and an implicitly performed operation (of removing a hypothesis), respectively.
We all concluded that it's a horrible system, but I'm really glad our talking about it was interesting :star_struck:
By the way @David Egolf were you the mystery commenter during the stream?
Morgan Rogers (he/him) said:
By the way David Egolf were you the mystery commenter during the stream?
No, I was not the mystery commenter. I was watching the archived video, well after you finished streaming. I'm glad that option was available, as the time zones wouldn't have worked out otherwise! (I'm on the west coast of Canada.)
It's fun to read about the origins of natural deduction:
Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, for which Lemmon gave a variant called system L.
Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper.
Changing $x$ to $y$ is called "$\alpha$-equivalence", and is indeed usually something that no one bothers about. In fact there are ways to make logic precise in such a way that this doesn't even arise, such as using "De Bruijn indices" instead of named variables. I think the more important meta-variable is $\mathbf{P}$, which could stand not only for a variable but for an arbitrary formula, so that $\forall \mathbf{x}\, \mathbf{P}$ could mean for instance $\forall x\, (x = x)$.
Well, while in some sense "no one bothers about" $\alpha$-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)
Yeah, that's right. They're a fairly common way to implement parsers, interpreters, &c. for functional languages.
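As a rough illustration of how that works (a minimal sketch with an invented term encoding, not any particular language's implementation): converting named $\lambda$-terms to De Bruijn indices replaces each variable occurrence by the number of binders between it and the binder that captures it, so $\alpha$-equivalent terms become literally identical.

```python
# Minimal sketch: named lambda-terms -> De Bruijn indices.
# Terms are ("var", name) | ("lam", name, body) | ("app", f, arg).
def to_de_bruijn(term, bound=()):
    kind = term[0]
    if kind == "var":
        return ("var", bound.index(term[1]))  # distance to the capturing binder
    if kind == "lam":
        return ("lam", to_de_bruijn(term[2], (term[1],) + bound))
    if kind == "app":
        return ("app", to_de_bruijn(term[1], bound), to_de_bruijn(term[2], bound))
    raise ValueError(f"unknown term: {term!r}")

# Alpha-equivalent terms get the same De Bruijn representation:
k1 = ("lam", "x", ("lam", "y", ("var", "x")))
k2 = ("lam", "a", ("lam", "b", ("var", "a")))
assert to_de_bruijn(k1) == to_de_bruijn(k2) == ("lam", ("lam", ("var", 1)))
```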
Also, fwiw, in block-style math on Zulip you can write deductions / typing rules using \cfrac, if you're interested:
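For example, a modus-ponens-style rule (just an illustration of the syntax, not a rule from the stream) could be written:

```latex
% Example of \cfrac for a deduction rule in Zulip math mode (illustrative rule):
\cfrac{\Gamma \vdash A, \; \Gamma \vdash A \to B}{\Gamma \vdash B}
```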
Thanks - that's the best thing I've learned all day!
FWIW, I would separate the premises by \quad or \qquad instead of a comma:
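For instance, the same illustrative rule with \quad between the premises:

```latex
% Premises separated by \quad rather than a comma:
\cfrac{\Gamma \vdash A \quad \Gamma \vdash A \to B}{\Gamma \vdash B}
```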
(Or \and when using \inferrule.)
Does \inferrule work here on Zulip? I'm trying it and I can't get it to work. What's the syntax?
Anyway, to stay on topic, I guess I must reply to this:
Morgan Rogers (he/him) said:
we were looking at Natural Deduction [...] We all concluded that it's a horrible system.
I wouldn't say that it is so horrible. Of all the available systems in proof theory, it is still the most "natural" in the sense that it reflects quite closely the flow of reasoning that we develop when we prove something. And it is very well structured: it has symmetries (introduction/elimination), a notion of analytic proof, etc., not to mention that the correspondence with the $\lambda$-calculus, known as the Curry-Howard isomorphism, is really an isomorphism only for natural deduction. Other proof systems may be related to computation, but only the proofs of natural deduction are exactly $\lambda$-terms. This may not be fundamental (some people, who prefer sequent calculus, may even see it as a limitation), but I wouldn't say it is horrible! :big_smile:
From my perspective, natural deduction is much better, for instance, than the proof system presented in chapter D1 of the Elephant. That is a horrible system! For example, I don't see any way of extracting from it a notion of analytic proof, that is, something satisfying the property that, in order to prove a formula $A$, you only use subformulas of $A$. This property is extremely important in proof theory and both natural deduction and sequent calculus admit complete subsystems which verify it (this is basically Gentzen's cut-elimination theorem).
But I suppose that if your viewpoint is purely model-theoretic/category-theoretic, then of course the Elephant's system makes more sense and might be more appealing :smile:
I believe Curry-Howard has been downgraded to a “correspondence”. To me, this is because, if you’re proving an isomorphism between two formal systems, what system do you use to do your proof?
You use whatever system of logic you want. I don't think that's why the Curry-Howard correspondence is not considered an isomorphism by some: I think it's because they don't know a category in which the two things in the correspondence are isomorphic objects. I.e. they don't know a general kind of thing, such that the correspondence is an isomorphism between two of these things.
John Baez said:
Well, while in some sense "no one bothers about" $\alpha$-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)
Yes, agreed. I just meant that I don't think dealing with $\alpha$-equivalence is the primary application of metavariables.
John Baez said:
You use whatever system of logic you want. I don't think that's why the Curry-Howard correspondence is not considered an isomorphism by some: I think it's because they don't know a category in which the two things in the correspondence are isomorphic objects. I.e. they don't know a general kind of thing, such that the correspondence is an isomorphism between two of these things.
Proving something about a formal system in that formal system feels very paradoxical to me.
dan pittman said:
Proving something about a formal system in that formal system feels very paradoxical to me.
Why more so than making statements about English in English?
Regarding natural deduction, I'm not sure if this is exactly what you meant, but I agree that it looks pretty bizarre when you write it with hypotheses at the top of proof trees, and have to "discharge" them when you do hypothetical reasoning like so:
$$\cfrac{\begin{matrix} [A] \\ \vdots \\ B \end{matrix}}{A \to B}$$
However, there's a much nicer way to write it, by making the context of every judgment explicit. Writing $\Gamma \vdash A$ to mean the judgment that $A$ is true in context $\Gamma$, the above rule becomes
$$\cfrac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}$$
In other words, when proving $B$ in the premise, we get to assume an additional hypothesis of $A$ relative to the assumptions used for the conclusion $A\to B$.
Mike Shulman said:
Why more so than making statements about English in English?
So my answer here is more philosophical, and probably veers into personal belief territory, but, I would say the same about English to English to be honest. Bear with me—I go back to my understanding of what Russell's original intention was with type theory, that because it was predicative, it was therefore not susceptible to the "set of all sets" problem. This of course was found not to be true either—what we now call Girard's paradox—out of which comes (eventually) Pure Type Systems which have an infinite hierarchy of universes, staving off that paradox on into infinity. It seems to me, that to link two entirely independent systems, you would have to step beyond that infinity. Now, that said, going back to your question about English to English, I agree with you in some ways, because what other choice do we have?! Maybe y'all are saying the same thing about the CHC, but w/r/t personal belief I find myself thinking often about the (mis)quoted epigraph from Le Guin's *Lathe of Heaven*:
To let understanding stop at what cannot be understood is a high attainment. Those who cannot do it will be destroyed on the lathe of heaven.
This also feels related to type theory's initiality problem. I remember reading a thread on the HoTT mailing list years ago, where (maybe it was even you that said it, @Mike Shulman !) that to do type theory in type theory we'd have to formalize pen and paper!
Mike Shulman said:
Regarding natural deduction, I'm not sure if this is exactly what you meant, but I agree that it looks pretty bizarre when you write it with hypotheses at the top of proof trees, and have to "discharge" them when you do hypothetical reasoning like so:
However, there's a much nicer way to write it, by making the context of every judgment explicit. Writing $\Gamma \vdash A$ to mean the judgment that $A$ is true in context $\Gamma$, the above rule becomes
In other words, when proving $B$ in the premise, we get to assume an additional hypothesis of $A$ relative to the assumptions used for the conclusion $A\to B$.
Isn't that the translation into sequent calculus? The proof tree syntax was exactly our complaint about natural deduction.
@Morgan Rogers (he/him) Well, people do use language in different ways, and in particular some people call anything that looks like "$\Gamma \vdash A$" a "sequent". But I've been taught (by type theorists like Dan Licata) that the difference between "natural deduction" and "sequent calculus" lies in the treatment of contexts and cut vs substitution, rather than in how you write the rules on the page. For instance, the natural deduction rule for eliminating disjunction, written with explicit contexts, is
$$\cfrac{\Gamma \vdash A \lor B \quad \Gamma, A \vdash C \quad \Gamma, B \vdash C}{\Gamma \vdash C}$$
whereas the sequent calculus rule is
$$\cfrac{\Gamma, A \vdash C \quad \Gamma, B \vdash C}{\Gamma, A \lor B \vdash C}$$
In general, each connective in sequent calculus comes with "left rules" that introduce that connective on the left of the turnstile and "right rules" that introduce it on the right, while each connective in natural deduction comes with "introduction rules" that introduce it on the right (often the same as the sequent-calculus right-rules) and "elimination rules" that contain it on the right in a premise.
The guiding principle of natural deduction is that the conclusion of every rule should have an arbitrary context. This allows substitution / composition to be defined by a simple induction, preserving the form of all rules. But it results in "redexes", proofs where a connective is introduced and then immediately eliminated, and that leads us to introduce a process of -reduction that "normalizes" a proof.
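For instance (an illustrative redex, not an example from the stream), introducing $\to$ and then immediately eliminating it is a detour that $\beta$-reduction removes by substituting the proof of $A$ into the proof of $B$:

```latex
% A redex: ->-introduction immediately followed by ->-elimination.
\cfrac{\cfrac{\Gamma, A \vdash B}{\Gamma \vdash A \to B} \quad \Gamma \vdash A}{\Gamma \vdash B}
% Beta-reduction replaces this detour by the direct proof of  Gamma |- B  obtained
% by substituting the given proof of  Gamma |- A  for the hypothesis A.
```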
By contrast, the guiding principle of sequent calculus is that the formulae in the premises are always simpler than those in the conclusion. This is great for automated proof search, because you can successively pick formulas to "focus" on and apply their rules, trusting that the search space is constrained by the complexity of the formulas you started with. This relies in particular on a theorem that every theorem has a proof not using the "cut rule"
$$\cfrac{\Gamma \vdash A \quad \Gamma, A \vdash C}{\Gamma \vdash C}$$
because it brings in a completely arbitrary formula $A$ in the premises. The proof that cut in sequent calculus is "eliminable" or "admissible" involves more or less the same computations as the proof that any natural deduction proof has a normal form without redexes, so you can sort of think of sequent calculus as a way of directly presenting only the normal forms of proofs.
In real life things get much more complicated than that, and lines get blurry, but in principle that's the idea of the distinction as I understand it.
Mike Shulman said:
John Baez said:
Well, while in some sense "no one bothers about" $\alpha$-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)
Yes, agreed. I just meant that I don't think dealing with $\alpha$-equivalence is the primary application of metavariables.
Agreed. I was just illustrating a random law of first-order logic that involves variables, and explaining to @David Egolf that to state such a law we often use metavariables to stand for arbitrary variables. I should have picked a more exciting law.
dan pittman said:
Proving something about a formal system in that formal system feels very paradoxical to me.
If you don't like it, use some other formal system.
But it's really quite important that formal systems can "talk about themselves". When we take something like the Liar Paradox and try to express it precisely using the ability of formal systems to refer to themselves, we don't get a paradox: instead, we get important results like Tarski's undefinability theorem: very roughly, it's impossible to define arithmetical truth in arithmetic.
John Baez said:
If you don't like it, use some other formal system.
But it's really quite important that formal systems can "talk about themselves". When we take something like the Liar Paradox and try to express it precisely using the ability of formal systems to refer to themselves, we don't get a paradox: instead, we get important results like Tarski's undefinability theorem: very roughly, it's impossible to define arithmetical truth in arithmetic.
Yes, yes. I agree! I think the sentiment you're expressing via Tarski's theorem is all I mean when I say it "feels paradoxical". Secondarily, I agree that it's important, if not straight up compulsory for systems to be able to talk about themselves. Going back to Mike's point about English in English—because there isn't a metatheory, really, of English, we have no other choice!
Okay, sorry - I thought the "feeling paradoxical" business was intended as an objection. If you said "it feels delightfully paradoxical" I would just have agreed. :upside_down:
Morgan Rogers (he/him) said:
The proof tree syntax was exactly our complaint about natural deduction.
Let me add something about this. You may not like it, but the proof tree syntax is what makes natural deduction "natural". When you write down a proof in "real life" (for example, in a journal paper), you usually proceed by successively building statements from hypotheses or from previously proved statements. Natural deduction formalizes this by saying that you build a proof tree by always working at the root of the tree. In its proof-tree formulation, the rules of natural deduction do precisely that: half of them (the introduction rules) tell you how to get a bigger formula at the root from smaller ones, the other half (the elimination rules) tell you how to get a (potentially) smaller formula at the root from a bigger one.
The rules that "discard" hypotheses do not really break the idea that you only work at the root, because they are not doing anything with those hypotheses, they are just marking them as "globally unnecessary", that is, those hypotheses are used locally in the proof but they are not global hypotheses of the final theorem. Also (but you probably already know this), via the Curry-Howard correspondence, discarding hypotheses is exactly binding variables: discarding occurrences of $A$ in a proof of $B$ in order to prove $A \to B$ is the same as forming the term $\lambda x.\, t$, in which we are binding certain free variable occurrences (those called $x$) in the term $t$. The variable $x$ becomes "local" to $\lambda x.\, t$, it is invisible from the outside. Maybe the proof-tree syntax looks weird, but if you see it as a term, it's not so strange after all. The "sequent" formulation of natural deduction mentioned by @Mike Shulman makes explicit which variables are free/which hypotheses have not been discharged, but does not change the idea that you always work at the root, as you "naturally" do in real life.
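Schematically (with the proof term $t$ and the discharge label $x$ chosen just for this sketch), the discharge step and the corresponding $\lambda$-abstraction line up like this:

```latex
% Discharging the hypothesis A (labelled x) corresponds to forming \lambda x. t :
\cfrac{\begin{matrix} [A]^{x} \\ \vdots \\ t : B \end{matrix}}{\lambda x.\, t \,:\, A \to B}
```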
In order to have only introduction rules (that is, rules that build bigger formulas from smaller ones), sequent calculus has one half of its rules (the right rules) which are exactly the same as the introduction rules of natural deduction, but the other half (the left rules) act on the leaves of the proof tree, rather than the root. This is quite unnatural, you never do this when writing down a proof.
The example given by @Mike Shulman illustrates this very well. Your goal is $C$, and so far you have proved $A \lor B$. To finish your proof, you assume $A$ and prove $C$, then assume $B$ and prove $C$, and you're happy, QED. This is the elimination rule for disjunction in natural deduction. Notice that the hypotheses $A$ and $B$ are "local": they are used in those two sub-proofs (you could call them "lemmas"), but they are not hypotheses of your main theorem, which is $C$ and which only uses whatever you used to get $A \lor B$. That's why you need to discharge them. I find this extremely "natural" :big_smile:
By contrast, sequent calculus does this weird thing where, at the point where you would have written QED above, you instead take your two proofs of $C$, one using the hypothesis $A$ and the other using the hypothesis $B$, and observe that you therefore have a proof of $C$ using the hypothesis $A \lor B$. Only then may you conclude $C$, using a cut rule with your previous proof of $A \lor B$. Of course this is perfectly sound, but you would never write things this way!
That being said, sequent calculus is probably the best system for proof theory, because this idea that you only build bigger formulas from smaller ones is extremely powerful and, in particular, matches directly that notion of analytic proof that I mentioned before: every rule of first-order sequent calculus, except cut, has the subformula property, that is, the formulas appearing in the premises are subformulas of the formulas appearing in the conclusion. Since you don't need cut (as mentioned by Mike), you can prove any provable formula $A$ by only using the subformulas of $A$! This has lots of important consequences (for example, the consistency of sequent calculus itself).
I just took the time to watch the replay and it was great. I think it's a great format to invite someone to explain something to you and you try to understand it. It shows the process of learning, and it was apparently fun for everyone involved.
the difference between "natural deduction" and "sequent calculus" lies in the treatment of contexts and cut vs substitution, rather than in how you write the rules on the page.
Agreed! Actually, I was always told that writing Natural Deduction with Gammas and Deltas is called "ND in Martin-Löf style".
It's been a month, but tomorrow at 9am Paris time we'll be back!
See you there.
Have had to cancel due to a major miscommunication about location :sweat_smile:
Sorry folks (pinging @Ralph Sarkis )
:cry: thanks!
There is a small chance we will be back later today instead; I'll try to give advance notice if so. (This did not happen)
Assuming @William Troiani remembers, we will be streaming in two hours, at 4pm Paris time. Link as above :)
[Technical issues are happening, namely that Will's laptop is not switching on, which is concerning beyond streaming purposes]
Solved!
This week we talked about Ramsey theory and some related combinatorics (we applied Hall's theorem in the context of Ramsey theory!). There was a stream freeze for 10 mins or more at one point, but hopefully there is still enough audio for it to be entertaining to watch beyond that point.
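For context, Hall's theorem says that a bipartite graph has a matching covering the whole left side exactly when every set of $k$ left vertices has at least $k$ distinct neighbours on the right. Here is a small illustrative Python sketch brute-forcing that condition (the function name and graph encoding are invented for this example; it is not code from the stream):

```python
# Illustrative sketch: brute-force check of Hall's condition for a bipartite
# graph given as {left_vertex: set_of_right_neighbours}.
from itertools import combinations

def halls_condition(neighbours):
    left = list(neighbours)
    for k in range(1, len(left) + 1):
        for subset in combinations(left, k):
            union = set().union(*(neighbours[v] for v in subset))
            if len(union) < k:
                return False   # this subset has too few neighbours
    return True                # a matching covering the left side exists

print(halls_condition({"a": {1, 2}, "b": {2}, "c": {1, 3}}))  # True
```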
Hello!
After I mentioned the stream in my update the other day, @William Troiani was keen to bring it back.
We're planning a stream this Tuesday 23rd January, starting at or after 8am CET so that the time zones overlap. It's not prime time, but it's research, so maybe you can get away with joining at work!
This week we will be discussing reading maths papers: what our approaches are and how to do it efficiently. We'll be reading a paper of Lurie, but the main focus will be on the "how" rather than the "what"; I'll try to keep the discussion to paper-reading in general. I hope to see some of you there!
Starting shortly!
I watched part of the recording, and it was great! (I hope to watch more of it when I'm feeling better, health-wise). Thanks for discussing several fairly specific points regarding strategies for reading papers (or books). I'll be interested to try and make use of these. I feel that such strategies are especially relevant for someone like me who is trying to learn math largely through self-directed study.
To take one example, I appreciated the emphasis on trying to understand the "story" a paper is trying to tell, and to understand at least somewhat its claimed justification for existence. Amusingly, this reminds me of my recent efforts to puzzle out a mystery novel I've been working through recently: if one can understand the big picture motivation and goals of the characters (or narrative), it becomes significantly easier to reason about specific actions that the characters take. I suppose it's not too unreasonable to view a math paper as a sort of mystery that one is attempting to puzzle out...
One other point also comes to mind - and I don't know if it was addressed in the stream because I haven't watched the whole recording yet. But in the context of self-study, I find it important to be able to form an estimate of how accessible a paper or book is to me at the current time. One of my current self-learning strategies is: try to primarily use the least difficult paper or book that still gets me excited about a given topic, while continuing to cross-reference more difficult (and less difficult) papers/books.
Of course, this may not be an appropriate approach if one's goal is to learn a lot about a very specific thing, as opposed to reading about related things. I'd be curious what strategies or techniques you have found helpful in this direction, or more broadly the criteria you make use of for selecting a book or paper to focus on.
Just watched a bit more and wanted to note a couple more interesting strategies discussed in the stream:
Great stuff!
Thanks so much for watching @David Egolf , it feels great to hear that those ideas landed, and it's also really useful for me to think about the contexts in which people might be attempting to read academic papers.
I guess it's an obvious thought, and people in the video may have discussed it, but I always try to become intimately familiar with some examples of the concepts that a paper is proving things about.
It's not necessary to have lots of examples, but it's important to have enough that it's unlikely some property will hold for all these examples but not in general. In other words, you shouldn't only have "trivial" or "degenerate" examples - the examples should be "generic".
Then, if you know these examples really intimately, you can often just see (in your mind's eye) whether a statement holds for all these examples, and guess that if so, it holds in general.
This technique gives you insight, which is like a second way of knowing, less certain than deductive reasoning, but often broader in scope, and lightning-fast.
Insight also helps you read papers! You evaluate each thing you read against your stock of examples and see how it works in those cases.
Hi folks, I'm not available but @William Troiani will be doing a Unary Operation stream from 8am CET tomorrow (Tuesday 30/01).
He's going to be reading some notes entitled "Wadler's Theorems for Free", so tune in if you want some more direct insight into his paper reading process!
I'll be back next time to discuss more general article-reading skills. :innocent:
Hi folks. @William Troiani and I are planning another episode of Binary Operation tomorrow. We'll be continuing our "article reading" discussion, focussing on how you can extract (a first approximation of) what you want to learn from a paper in a reasonable amount of time, and then how to efficiently close-read (or at least our approach to that).
Please do tune in tomorrow from 8am CET if you have time!
I won't be able to join :cry:, but have fun! I wanted to suggest trying some AI tools to help reading a paper. I remember this one was very good at paraphrasing (even with papers using weird symbols), but it did not really make things clearer. Curious what you think about it.
In case anyone was considering joining but forgot, we've been chatting about some algebra until now and are only just getting onto paper reading, so it's a good time to hop in :)
For anyone watching the stream recording later, we start with me explaining the contents of a paper of mine that got accepted last week. That's more technical than I would have liked to open the stream, so I would recommend you skip to the cute algebra puzzle starting at 18 mins. After that, skip to 42 mins in for the start of the reading discussion, and to 1 hour 10 mins for us playing around with the AI summarizing tool that @Ralph Sarkis suggested. Spoilers: the AI wasn't very good.
Quite enjoyed the algebra puzzle, and am hoping to watch more when I have more energy! These streams are a lot of fun, and educational too. :smile:
On the topic of studying a paper or a book, there was a brief mention (during the stream) of using LaTeX to create notes. I wanted to mention that my favorite environment for typing up notes that include LaTeX these days is Obsidian.
I like it because:
My work environment ends up looking like this:
[image: Obsidian demo screenshot]
Hi folks! Short notice, but we will start streaming in around 15 mins. See you there ;)
This week we talked about a surprising connection between axioms of set theory and functors for about 40 minutes before getting onto what may be our last reading discussion:
One possibility is discussing preparing a talk. There's lots to cover there, from structure to slide layout to time-saving tricks to performance advice, so it seems promising.
On the technical side, someone expressed an interest in topos theory. There was also another suggestion, I think, although I seem to have forgotten it :stuck_out_tongue_wink:
What would you like to see us do? Ideally we would like to combine suggestions for form and content, as we have done with the reading streams, where we were examining both how to read effectively/efficiently while also being interested in the maths that we were reading (it doesn't have to be research level, by the way). Do let us know what you would like to see!
Hi folks, it's Tuesday morning again and in spite of a late start we'll be streaming in less than 10 mins from now ;)
Hope to see you there!
Today we discussed a paper of mine (in collaboration with @Marie Kerjean and @Valentin Maestracci ) that is being submitted to FSCD, then had a more general discussion about the risks of insularity in maths. For me, this discussion was motivated by my perpetual feeling of ignorance regarding the broader motivations in logic. If you have some preference on what we do/discuss next week, let us know :)
disclaimer: please do not take our reported speech of other mathematicians as authoritative, we did not consult anyone ahead of making this stream :sweat_smile: hopefully we can have a more experienced logician tell us about their motivations on the stream in the future?
Tomorrow (Tuesday) morning from 8am CET we will be discussing how to prepare a talk. We will select a result from a Category Theory textbook as material that we want to present to an audience, and see what is involved in the process from there.
We hope to see you there!
(Starting now!)
We discussed the value of regular research group meetings for PhD students for the first twenty minutes, then moved on to a series of slides on 'How to prepare a talk' by Marjorie Bachelor for around fifty minutes, and finally we spent the last ten minutes or so selecting a result from Emily Riehl's Category Theory in Context that we will go through the process of preparing a talk about next time (unless another topic takes our interest in the meantime).
Speaking of next time, we need to change the day/time that we stream due to Will's teaching commitments. This will probably mean no stream next Tuesday in a first instance. I'll post here when we decide when to stream next!
moved onto a series of slides on 'How to prepare a talk' by Marjorie Bachelor for around fifty minutes,
thanks for this, I know Marj, but didn't know about the notes!