Rongmin Lu said:
sarahzrf said:
Rongmin Lu said:
Anecdotally, many programmers I know hate mathematics. For some reason, the way of thinking that we take for granted is very alien and uncomfortable to them, whereas the way of thinking that pays off in programming comes more naturally for them.
as a programmer, can second this anecdote, there's a lot to unpack
Please unpack, the floor is yours.
god im not sure where to start
i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking
but there's definitely something to it, it's just nuanced
i almost wanna say that some of the differences are aesthetic, but that's not quite what i mean
maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?
anyway that's only part of it
that entirely aside, there's also a certain amount of, like, blind hostility toward math among many programmers, as you said in less colorful language
So thinking is a cospan of mathematics and computer science?
:)
im tempted to say that some of that blind hostility is a symptom of a bigger pattern of sort of like
There's a lot of hostility toward academic investigation of programming among programmers.
yeah
maybe not quite anti-intellectualism, but anti-academicism sounds right
not all of it is unfounded, but
there's this trope among many programmers of Everything Getting Invented By Hackers Outside Of The System, roughly
"hackers" in the traditional sense, not as in "cracker", of course :upside_down::upside_down::upside_down:
you also have the fact that many programmers are autodidacts
which synergizes with that trope tbh
Right. That's probably a big part.
Also just the general artificial divide, which results in some of the opposite phenomenon, too.
i guess broadly there are a lot of programmers who have overly-universally-applied skepticism of "intellectual authority"
Well, they're all the smartest person ever to live.
100% true
sarahzrf said:
i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking
Well, duh, programming is just applied maths, amirite? :smiling_devil:
sarahzrf said:
maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?
I'd put it this way: you can compile programs, but you can't compile proofs (yet).
yes you can
Not in the way most proofs are written today.
actually fuck youre right i cant compile my proofs :cry:
image.png
OTOH, programs have to be compiled, and they have been since time immemorial.
That's the difference. When programmers push programs to production, there's a defined QA process.
When mathematicians push proofs to journals... they just keep pushing until one of them gives.
You know how the "peer review" process works, I hope? How does that compare with the code review process?
if programs have to be compiled then explain this image.png
There's probably a reason why scripts-as-a-service hasn't taken off yet.
https://news.ycombinator.com/item?id=1982489
sounds like taking off to me
hmm, i'm just being flippant :)
Alright, let's be pedantic: programs have to be processed into workable machine code. Whether you call this interpreting or compiling is just semantics.
/me takes a deep breath
FIRST of all
no, but seriously
peano arithmetic can trivially be compiled to itself, but it cannot interpret itself
an interpreter is something that performs an act of dequotation
Nobody writes proofs in Peano arithmetic. It's like the machine code of mathematics.
a compiler merely transforms syntax
okay, then ZFC can trivially be compiled to itself, but it cannot interpret itself
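For concreteness, a toy version of the interpreter/compiler distinction sarahzrf is drawing, as a Coq sketch (expr, eval, and constant_fold are illustrative names, not from the chat): the interpreter dequotes syntax into values, while the "compiler" pass never leaves the syntax.

(* a tiny language of quoted arithmetic *)
Inductive expr : Type :=
  | Lit : nat -> expr
  | Add : expr -> expr -> expr.

(* an interpreter dequotes: it turns syntax into actual values *)
Fixpoint eval (e : expr) : nat :=
  match e with
  | Lit n   => n
  | Add a b => eval a + eval b
  end.

(* a "compiler" pass merely transforms syntax into syntax *)
Fixpoint constant_fold (e : expr) : expr :=
  match e with
  | Lit n   => Lit n
  | Add a b =>
      match constant_fold a, constant_fold b with
      | Lit x, Lit y => Lit (x + y)
      | a', b'       => Add a' b'
      end
  end.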
The point being: when code breaks, we will (eventually) know.
When proofs break, nobody knows, until and unless somebody actually sits down and reviews the proofs.
Have you folks seen MetaMath from Princeton?
distantly
anyway i do not disagree with you on the original point you wanted to make about programming dealing with a fundamentally rigid being
but i think you've gone off the rails :thinking:
It compiles math. 2+2=4 is around theorem 2700. :slight_smile:
yikes
you can prove 2+2=4 in coq from absolute scratch in probably 10 lines
most of which are defining equality and natural numbers and addition
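For concreteness, a minimal sketch of what that from-scratch development might look like in Coq (nat', add, and eq' are illustrative names, not sarahzrf's actual code):

(* define natural numbers, addition, and equality ourselves *)
Inductive nat' : Type :=
  | O' : nat'
  | S' : nat' -> nat'.

Fixpoint add (m n : nat') : nat' :=
  match m with
  | O'    => n
  | S' m' => S' (add m' n)
  end.

Inductive eq' {A : Type} (x : A) : A -> Prop :=
  | refl' : eq' x x.

(* 2+2=4 then holds by pure computation *)
Theorem two_plus_two :
  eq' (add (S' (S' O')) (S' (S' O'))) (S' (S' (S' (S' O')))).
Proof. apply refl'. Qed.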
Daniel Geisler said:
It compiles math. 2+2=4 is around theorem 2700. :)
Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...
sarahzrf said:
you can prove 2+2=4 in coq from absolute scratch in probably 10 lines
That's an improvement.
Anyway, my point was: programmers operate on a more concrete level than pure mathematicians, even though they are both "just" doing the same thing.
the actual proof of 2+2=4 is gonna be like
Definition two_four : 2 + 2 = 4 := eq_refl.
i never said they were "just" doing the same thing
sarahzrf said:
i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking
All right, you said this.
It's fascinating seeing the theorems that are chosen over similar theorems. They tweak math.
sarahzrf said:
maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?
And this.
well, you're not putting it to the same end, even if you're thinking in many of the same ways
but look
Exactly, and that's kind of my point. The end is more concrete for programmers, and more abstract for mathematicians.
ah, sure :)
I'm glad we can agree on this.
I don't know about the coend though.
the coend is for programming
that's why it shows up in optics
Funny how I had a hard time explaining to some programmers what a coend is then...
I'm an autistic yogi who studies meta-cognition and has intense visualization synesthesia that might mean I'm a savant. I've also done large amounts of physics, math, programming and psychology. Ask me anything. :slight_smile:
And the point I was trying to make is what drove Voevodsky to develop HoTT. He was terrified to find out that there was a flaw in his work and nobody had discovered the error, even though plenty of people had apparently read it.
Source: https://www.ias.edu/ideas/2014/voevodsky-origins
here is my question: can metamath do this image.png
you (the metamath) just did :P
:O
*is about to go to sleep* *starts reading the shaky foundations topic* hmm. somehow this is not helping me sleep.
i apologize for the forall _ : A, stuff, coq's ordinary A -> B notation is actually defined in the base library as syntactic sugar for that
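(This can be checked in Coq directly:)

Locate "->".
(* Notation "A -> B" := (forall _ : A, B) : type_scope *)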
yes thank you!! being shot downwards into my bed by Samuel L. Jackson’s authoritative voice is just the ticket :)
Metamath's strength is that the system is a small program that loads all the basic rules of math. I believe in an 80/20 approach: if one technology is at 80% and another technology is at 20%, I learn and use both. Coq is awesome, but it complements Metamath.
MethMath :flushed:
is that what erdos did
sarahzrf said:
is that what erdos did
Pretty much, yeah
Shouldn't write with Sisters of Mercy cranking.
Maybe we should change the Zulip turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.
Daniel Geisler said:
Metamath's strength is that the system is a small program that loads all the basic rules of math. I believe in an 80/20 approach: if one technology is at 80% and another technology is at 20%, I learn and use both. Coq is awesome, but it complements Metamath.
I think what's lacking with these systems is that they require people to learn a new syntax. This is something that working mathematicians are not prepared to do at the moment. There's a need to couple systems like this with a natural-language-processing module.
Daniel Geisler said:
Maybe we should change the Zulip turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.
Great idea. I seem to recall someone has written a paper on using CT to model PERT charts, but I can't find it at the moment.
Rongmin Lu said:
Daniel Geisler said:
It compiles math. 2+2=4 is around theorem 2700. :)
Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...
Though to be fair: "One of the reasons that the proof of 2 + 2 = 4 is so long is that 2 and 4 are complex numbers—i.e. we are really proving (2+0i) + (2+0i) = (4+0i)—and these have a complicated construction (see the Axioms for Complex Numbers) but provide the most flexibility for the arithmetic in our set.mm database." (http://us.metamath.org/mpeuni/mmset.html#trivia)
Alexis Hazell said:
Rongmin Lu said:
Daniel Geisler said:
It compiles math. 2+2=4 is around theorem 2700. :)
Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...
Though to be fair: "One of the reasons that the proof of 2 + 2 = 4 is so long is that 2 and 4 are complex numbers—i.e. we are really proving (2+0i) + (2+0i) = (4+0i)—and these have a complicated construction (see the Axioms for Complex Numbers) but provide the most flexibility for the arithmetic in our set.mm database." (http://us.metamath.org/mpeuni/mmset.html#trivia)
This is the point at which many working mathematicians will throw their hands up and say: "Formalise maths? Over my dead body!"
2+2=4 MetaMath page http://us.metamath.org/mpeuni/mmset.html#trivia
Proof is 184 levels at its deepest. There will be a test.
Daniel Geisler said:
2+2=4 MetaMath page http://us.metamath.org/mpeuni/mmset.html#trivia
Proof is 184 levels at its deepest. There will be a test.
Yeah nah, I'll pass.
And that's the problem.
Metamath is a bastion of set logic with its 27000 proofs. My rhetorical question is: wouldn't a category-theoretic version result in a significantly terser system?
Rongmin Lu said:
Daniel Geisler said:
Maybe we should change the Zilup turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.
Great idea. I seem to recall someone has written a paper on using CT to model PERT charts, but I can't find it at the moment.
Ah, found it. It was a post by Simon Willerton on the n-Category Cafe, "Project Scheduling and Copresheaves". Somehow this never made it into a paper by him, although someone reached out to him about it last year.
Daniel Geisler said:
Metamath is a bastion of set logic with its 27000 proofs. My rhetorical question is: wouldn't a category-theoretic version result in a significantly terser system?
It depends on what you mean by "terse". As sarahzrf has pointed out above, Coq seems to be terser. But I think you still need a database of results. 27000 proofs are probably not enough.
i dont know that coq is terser
it's a richer system
I suspect that Coq and CT have a larger "operating system". What would be interesting is knowing which could utilize resources more effectively.
Rongmin wrote:
He was terrified to find out that there was a flaw in his work and nobody had discovered the error...
Nobody bothered to find the specific error, not even Simpson (who wrote a paper showing Voevodsky's result was wrong), but nobody believed the result was correct, which is why nobody used it. If it had been right it would have been so wonderful everyone would have been using it a lot!
Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.
But if this led him to invent HoTT, I guess it all worked out for the best!
John Baez said:
Nobody bothered to find the specific error, but lots of us knew the result was wrong, which is why nobody used it. If it had been right it would have been so important everyone would have been using it a lot!
Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.
But if this led him to invent HoTT, I guess it all worked out for the best!
See, that's kind of the problem. "Nobody bothered".
That's what I was trying to point out to sarahzrf to be the difference between programmers and mathematicians: the former tends to have a better feedback loop than the latter.
In an environment with a good workflow, code is scrutinised before being put into production.
And in today's cloud computing world, a bug is a security issue that can bring down your business, so QA is very important.
Compare this to the maths world. "Really it was Voevodsky's job to find the mistake in his proof"? This, for him, was an unknown unknown. How was he to know, if nobody would tell him, because "nobody bothered"?
The first moon landing had a serious bug found right before liftoff: a swapped sign resulted in negative gravity.
Now imagine if "nobody bothered".
From the article Voevodsky wrote:
This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.
There's the deference to authority at work here, as John Baez's "nobody bothered" suggests. "Nobody bothered" because "surely" someone of Voevodsky's calibre and status would have already figured out that there was a problem.
There is a certain objectivity about programmers that few mathematicians have. When I walk into a new programming site, the reality of who I am is sorted out in a matter of a day, maybe a few days at the most. Now who am I as a mathematician? That has many more degrees of freedom.
Daniel Geisler said:
That has many more degrees of freedom.
It does also mean that mathematicians enjoy more freedom, in that they're freer of scrutiny. I think that's perhaps why many are hostile to the formalisation of maths, amongst many other reasons.
A mathematician is required to live a life of continual abstraction, generalization and simplification.
Daniel Geisler said:
A mathematician is required to live a life of continual abstraction, generalization and simplification.
A pure mathematician, maybe. And this is a bit of a caricature as well, one that the culture seems happy to perpetuate.
Ideally, there should be no dichotomy between math, physics and computer science. Is computer science a cospan of math and physics?
no
computer science doesnt have that much in common with physics as far as im concerned
Rongmin Lu said:
There's the deference to authority at work here, as John Baez's "nobody bothered" suggests. "Nobody bothered" because "surely" someone of Voevodsky's calibre and status would have already figured out that there was a problem.
he was suggesting the opposite :thinking:
he said that nobody bothered finding the issue because they knew it was wrong, so why care about the specific reason
sarahzrf said:
he said that nobody bothered finding the issue because they knew it was wrong, so why care about the specific reason
Then he goes on to blame Voevodsky for not finding it out.
:)
John Baez said:
Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.
That's what he wrote, sarahzrf.
that means something different from what you're saying
Please explain?
you're talking about an attitude that it should be voevodsky's job to make sure there arent any mistakes, so that other people dont end up thinking it's right because they missed the mistake
but he's talking about it already not being seen as right, and it being his job to work out the issue
sarahzrf said:
but he's talking about it already not being seen as right, and it being his job to work out the issue
The thing was, according to Voevodsky, he spent many years not knowing that it wasn't right, and nobody told him.
well, take it up with john, not me :shrug:
Rongmin Lu said:
This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.
As I quoted above.
apparently he has a different impression
programmers almost always work with tangibles: almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary; all the functions they work with are computable, sort of by definition; and of course there's a constant dialogue with the computer, not only correctness checking, but computationally – so if you don't know how something behaves, you can literally run it on different inputs and see how it works.
of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.
but I use functional programming so I rarely have to worry about that :wink:
sarahzrf said:
well, take it up with john, not me :shrug:
I'm taking it up with him.
almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary
very false :scream:
But you just joined the convo.
Nicholas Scheel said:
programmers almost always work with tangibles: almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary; all the functions they work with are computable, sort of by definition; and of course there's a constant dialogue with the computer, not only correctness checking, but computationally – so if you don't know how something behaves, you can literally run it on different inputs and see how it works.
Yup, you've just described how the feedback loop is faster for programmers than for mathematicians.
but the stuff about there being pushback & a dialogue is extremely on point
i meant to react earlier when rongmin said something about feedback loops but i got distracted
im pretty sure the feedback loop is why im able to hyperfocus on programming semireliably when it's a project im particularly interested in, whereas im rarely able to hyperfocus on math even when im very interested in it (actually im not sure how often i do but definitely less than programming)
@sarahzrf care to explain your reaction re the sizes?
:upside_down:
Integer -> Integer is uncountable, for example
and not very serializable
but programming functions are computable, so I think it’s actually countable again ... meta-theoretically countable, or pseudo countable, or something
also Dhall has function serialization, but I digress
everything can be meta-theoretically countable
skolem strikes again
Size issues are really not the problem here. Habits and ways of doing things are.
actually, here is a nice puzzle for you
since you're so convinced that this type is actually countable
well, part of the reason I brought up size is that (excluding functions) most data that programmers work with has serialization and even decidable equality, and that ties back into the computational aspect
let's take, i dunno, MLTT w/ natural numbers and equality, or something
and add a constant code : (nat -> nat) -> nat
to it
and some rules so that an application of this constant to a value in an empty context reduces to the gödel number of the term it is applied to
like imagine a mathematician just asking their computer what the genus of an arbitrary manifold is – that could never happen, the set of manifolds is way too huge to get computational results from it ...
then a constant code_spec : forall f g : nat -> nat, code f = code g -> f = g
with computational behavior that... yadda yadda, this stuff probably cant be made to work particularly sanely in practice but there's a moral i want to get at
Nicholas Scheel said:
like imagine a mathematician just asking their computer what the genus of an arbitrary manifold is – that could never happen, the set of manifolds is way too huge to get computational results from it ...
Yeah, well, if you ask a human what the genus of an arbitrary manifold is, and don't give any information, you won't get anything either.
now: it is provable in mltt, by the standard cantor argument, that there is no injection (nat -> nat) -> nat
but we've just added one to the system
so if we apply the proof to it, we get a proof of false
sigh I’m still up
but we gave computational behavior to all of the pieces
so what gives?
what happens when we beta-reduce the proof of false?
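For reference, here is the easy half of the diagonal argument being invoked, as a self-contained Coq sketch (this is the no-surjection form; the puzzle above rests on the stronger no-injection form):

(* no surjection from nat onto nat -> nat, by diagonalisation *)
Theorem no_surjection :
  forall f : nat -> (nat -> nat),
    ~ (forall g : nat -> nat, exists n : nat, f n = g).
Proof.
  intros f Hsurj.
  (* diag differs from f n at input n *)
  set (diag := fun n => S (f n n)).
  destruct (Hsurj diag) as [n Hn].
  (* if f n = diag, then f n n = S (f n n), contradiction *)
  apply (n_Sn (f n n)).
  change (S (f n n)) with (diag n).
  now rewrite Hn.
Qed.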
riddle me this, if u think nat -> nat is so countable in a programming language
anyway I think @Nicholas Scheel nonetheless has a point: mathematical terms are thought of, in general, as referring to abstract objects that could not in principle be stored in a computer.
youd be surprised
actually, when you compile a program, the number of functions is finite :troll:
Nicholas Scheel said:
of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.
That's another thing I should also point out. Pure mathematicians, at least, work with things that are mostly static. CT is most definitely static: you may be able to deal with dynamic objects, but only with great difficulty.
Nicholas Scheel said:
actually, when you compile a program, the number of functions is finite :troll:
defunctionalization time
T Murrills said:
anyway I think Nicholas Scheel nonetheless has a point: mathematical terms are thought of, in general, as referring to abstract objects that could not in principle be stored in a computer.
That is soooo 20th century.
Rongmin Lu said:
Nicholas Scheel said:
of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.
That's another thing I should also point out. Pure mathematicians, at least, work with things that are mostly static. CT is most definitely static: you may be able to deal with dynamic objects, but only with great difficulty.
brouwer is spinning violently in his grave :weary:
also lawvere probably
and he's not even dead yet!
Please explain?
tell me most mathematicians don’t think of a topological space as actually having an uncountable number of points
Especially with Brouwer.
T Murrills said:
tell me most mathematicians don’t think of a topological space as actually having an uncountable number of points
Nope. It's just a blob. Usually a 2-dim one.
It’s too late for me to tell if you’re trolling me or not lol
Absolutely trolling. But also true.
because like...you’re not wrong
synthetic mathematics time
See Jules Hedges tweet here: https://twitter.com/_julesh_/status/1241481443886456832
So if it's just a blob, we can store it on Github
booo
dont put blobs in git
well, you raise an interesting point: programmers expect any arbitrary object (of a certain type) to be storable as data, which basically means it's countable; physicists expect an object to actually exist in the universe, somewhere (possibly inside a black hole); and mathematicians almost always work with descriptions of objects (not encodings of objects), and a large part of the trick is relating different descriptions of the same object, which requires proofs of arbitrary complexity, not just computation
like......the thing actually thought of does not have uncountably many points...but it is thought to be something which does? ya feel? *waits for sarah to say 'intension'*
sarahzrf said:
brouwer is spinning violently in his grave :weary:
Point taken, but intuitionism leads to constructivism, which leads to programming, which is, as we've discussed above, not exactly mathematics.
it’s the classic argument of whether 19374829^163947^16373^16372 actually exists. Mathematicians say yes, I just wrote down a description of that number, and programmers say no, you can’t represent it in binary.
mathematicians dont all say yes
Classical mathematicians
some of them, like edward nelson, have integrity
not finitists, sorry forgot to clarify
not talking about Egyptians, either
Nicholas Scheel said:
it’s the classic argument of whether 19374829^163947^16373^16372 actually exists. Mathematicians say yes, I just wrote down a description of that number, and programmers say no, you can’t represent it in binary.
You wrote down a program. If we execute that program with enough resources, it'll give you that number.
edward nelson was anti-finitism
how much resource do you need for that, rongmin
:)
my thoughts lean towards “people who say numbers represented in binary actually exist” are also wrong in a very important sense
sarahzrf said:
how much resource do you need for that, rongmin
A lot. Mathematicians just use lazy evaluation extensively.
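The "you wrote down a program" point can be made literal in Coq; a minimal sketch (big is an illustrative name): the definition typechecks instantly because nothing forces evaluation, but normalising it to a numeral is hopeless.

Require Import NArith.

(* the description typechecks immediately; computing it would need
   astronomically more memory than exists *)
Definition big : N := (19374829 ^ 163947 ^ 16373 ^ 16372)%N.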
I N T E N S I O N
inb4 “Haskell programs have no side effects bc nobody ever runs them”
I N T E N S I O N I N T E N S I F Y I N G
Nicholas Scheel said:
inb4 “Haskell programs have no side effects bc nobody ever runs them”
Monads
yall have very little of my time left before i insist that you get some experience with a type-theory-based proof assistant before discussing the relationship between mathematics and programming with me
i've been there. i don't think it informs this debate very much.
mostly because it's not about "mathematics and programming" for me, it's "the cultures of doing mathematics and programming".
which is why i said size issues are not relevant here.
I have that experience but I think it mumbles propagates a map/territory conflation or something
that said I do think it informs more philosophical debates on the differences between “math” and “programming”
hmm, im mostly whining about some of what im seeing about what is possible to compute with
T Murrills said:
I have that experience but I think it mumbles propagates a map/territory conflation or something
that said I do think it informs more philosophical debates on the differences between “math” and “programming”
Most people conflate the map with the territory because they haven't been there or haven't thought much about the territory.
Nicholas Scheel said:
well, you raise an interesting point: programmers expect any arbitrary object (of a certain type) to be storable as data, which basically means it's countable; physicists expect an object to actually exist in the universe, somewhere (possibly inside a black hole); and mathematicians almost always work with descriptions of objects (not encodings of objects), and a large part of the trick is relating different descriptions of the same object, which requires proofs of arbitrary complexity, not just computation
im looking at u
the map is not the codomain!!!
objects != functions, from the programmer’s perspective
I mean, you can compute about anything, you just can’t necessarily compute anything in the sense of what a mathematician might assume a thing is, sorta, and I think that distinction...is here (idk, [end of sentence])
Nicholas Scheel said:
objects != functions, from the programmer’s perspective
How do you store functions then? Not as "data"?
actually, I might want to take that back: Turing machines are countable, right?
turing machines are not sets
unless you're using set-theoretic foundations like an animal
the set of Turing machines I assume @Nicholas Scheel means
on a finite alphabet? yeah i think so
but a turing machine is an intension, not an extension
hell yeah there it is!!!!
rest assured, I do not mean to ask questions that don’t typecheck, I just am generous with implicit verbiage
there is no computable surjection (nat -> bool) -> nat, but there's certainly a computable surjection □(nat -> bool) -> nat
what's box here? I'm used to seeing it as a modal operator
where □ is the "intension"/"some code for" modality
ah nice!
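A toy version of the "some code for" modality, sketched in Coq (Code, eval, godel, and the constructors are illustrative names, not from the discussion): a datatype of codes for a few functions nat -> bool injects computably into nat and has decidable equality, while the functions it denotes do not.

(* codes for (some) functions, as opposed to the functions themselves *)
Inductive Code : Type :=
  | IsZero : Code
  | IsEven : Code
  | Compl  : Code -> Code.

(* dequotation: from a code to the function it denotes *)
Fixpoint eval (c : Code) : nat -> bool :=
  match c with
  | IsZero   => fun n => Nat.eqb n 0
  | IsEven   => Nat.even
  | Compl c' => fun n => negb (eval c' n)
  end.

(* codes, unlike nat -> bool itself, inject computably into nat *)
Fixpoint godel (c : Code) : nat :=
  match c with
  | IsZero   => 0
  | IsEven   => 1
  | Compl c' => 2 + godel c'
  end.

(* and codes have decidable equality, which the functions do not *)
Fixpoint code_eqb (c d : Code) : bool :=
  match c, d with
  | IsZero, IsZero     => true
  | IsEven, IsEven     => true
  | Compl c', Compl d' => code_eqb c' d'
  | _, _               => false
  end.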
check this out T https://arxiv.org/abs/1703.01288
this kind of interpretation of □ is a curry-howard transpose of provability logic
@sarahzrf omgomgomg. this is great.
yeah :)
i noticed just earlier tonight that the author is on this server actually
(altho i dont think that paper originates that interpretation of □? im not sure... yeah, it calls it an "old piece of programming lore" in the intro)
I think it’s related, but I never quite understood the technical details of Andrej’s work here: is he assuming that the functions are computable? http://math.andrej.com/2014/05/08/seemingly-impossible-proofs/
nope!
conats are omniscient in pretty bare constructive foundations
I also think the intensional/extensional dichotomy is but a precursor to some deeper philosophical understanding of relative intension/extension but
oo https://www.cs.bham.ac.uk/~mhe/papers/negative-axioms.pdf
aw yeah :) I will read this but quick question: I never got why MLTT is called “intensional”. what is the connection to intension, exactly?
it's in reference to the equality
the equality in standard mltt is an intensional one
you can only prove that things are equal if they are intensionally equal
well, i say "things" when i mean "functions" mostly
and by intensionally equal we mean...equal by definitional equality?
rrrrroughly
but here he’s assuming that functions are computable/continuous? http://math.andrej.com/2007/09/28/seemingly-impossible-functional-programs/
yes, there he is
gets more confusing when you have free variables in scope
Like, the prop equality holds only if a definitional equality holds? by definitional equality I mean something you can derive as a definitional equality judgment in the. you know. the inference system.
but yeah we have that (there exists a term p such that ⊢ p : x = y) iff (⊢ x ≡ y)
ah there it is cool cool
if you put a Γ in front of the ⊢ then im not prepared to stake anything on it
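To make the intensional/definitional point concrete, a small Coq illustration (f1 and f2 are illustrative names): the two functions agree on every input, but eq_refl only proves the pointwise equations, where both sides reduce to the same normal form.

Definition f1 : nat -> nat := fun n => n + 0.
Definition f2 : nat -> nat := fun n => n.

Check (eq_refl : f1 2 = f2 2).
(* accepted: both sides compute to 2, a definitional equality *)

(* Check (eq_refl : f1 = f2). *)
(* rejected: f1 and f2 are extensionally but not definitionally equal,
   so without funext there is no closed proof of f1 = f2 *)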
anyway this is actually kind of a problem
if you want your defeq to be decidable, it implies that your propeq for closed terms is decidable
which means that funext cannot be in your system, since propeq would then include extensional eq of functions nat -> nat, say, and that is not decidable
unless you wanna throw out canonicity or subject reduction or something, in order to break the equivalence i wrote
honestly, if you don’t have funext, you’re not doing math
okay, that feels like...an ad-hoc port of the concept of intensionality into Math if you know what I mean
Ah I’m gathering function extensionality introduces undecidability? think I’ve heard that, but not sure why
guess im not doing math then
see here https://github.com/coq/coq/wiki/extensional_equality
thank u @Conor McBride ^__^
anyway i should really fucking be going to sleep it's 4:30am
:zzz:
In general, this convertibility check must work for any two extensionally equal functions and that is undecidable in general.
this is what I don’t get but it’s probably some diagonal (lawvere) thing I’m guessing
no i just mean
go try writing an Eq instance for Integer -> Integer in haskell
see how it goes
is it just
if you wanna be really concrete
you gotta check all the integers?
set one of them to constant 0 and the other to 0 as long as your favorite turing machine has not halted
oh.
yeah. that’ll do it, lol
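sarahzrf's counterexample, spelled out as a Coq sketch (halts_by is a hypothetical stand-in for "the machine has halted within n steps"):

Section Halting.
  (* hypothetical parameter describing some fixed machine *)
  Variable halts_by : nat -> bool.

  Definition always_zero : nat -> nat := fun _ => 0.
  Definition zero_unless_halted : nat -> nat :=
    fun n => if halts_by n then 1 else 0.

  (* the two functions are extensionally equal iff the machine never
     halts, so any decider for extensional equality on nat -> nat
     would decide the halting problem *)
End Halting.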
ok well I should sleep too
It is also 4:30
(here)
EST represent
:zzz:
:zzz:
I abandoned EST two weeks ago ... has my sleep schedule changed? nope!
Rongmin Lu said:
Rongmin Lu said:
This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.
As I quoted above.
In that quote Voevodsky speaks of a different 1993 paper, in which he himself found a mistake a few years later. @John Baez was talking about the Kapranov-Voevodsky 1991 paper that was disproved by Simpson. From what I know, it is true that the latter had not received much attention.
But it is true that Voevodsky seems to have thought that the 1991 paper was correct for many years (even doubting the Simpson counterexample for a while)...
The 1991 paper is quite an “extraordinary” example in that it really contains no proper proofs, only sketches of arguments. So I agree with John that it would have been much to ask mathematicians who were already doubtful of the result to first reconstruct the supposed proof, and then find a mistake. :)
EDIT: while I was writing this lecture, Amar made some of the same points, in short and suitable ways :-)
It should be noted that Voevodsky talks about two different mistakes, of different mathematical nature, and occurring in very different sociological contexts.
One is a mistake made in a technical lemma in the field of motivic cohomology, a field involving top mathematicians at top universities, Voevodsky himself being a top expert who later got the Fields medal for his work -- everything top. This is the case mentioned where people held seminars all over the world, but did not detect the mistake, accepting the technical result without checking the proof, trusting Voevodsky's authority, and using it in subsequent work. Voevodsky himself found the gap and provided a correct proof. In the end it was just a technical mistake, but potentially affecting many people's work.
The other mistake is his result with Kapranov about strict ∞-groupoids realising all homotopy types. That's a very different context: it was around 1990, years before people like Baez-Dolan, Street, Batanin, Tamsamani and Simpson really got higher category theory started. It seems that not so many people took notice at the time, and it was published in Cahiers, a very good journal, but probably considered 'obscure' by most people outside category theory. Those who did take notice did maybe not fully believe the result: it was already known that the result could not be true with standard geometric realisation. The subtle point was that the KV argument used a non-standard geometric realisation. I suppose the reaction at the time was: 'maybe it is true, maybe not, but it is not about the standard geometric realisation we care about'.
Simpson's preprint contradicting Kapranov-Voevodsky came in 1998. He was not able to debug the KV proof and pinpoint a faulty deduction, which is difficult because the paper is very sketchy. But what he did was much better (unfortunately not mentioned in the various Voevodsky interviews): he found the reason it could not work, namely a problem with strict units, hence showing that the mistake is a conceptual one, not just a technical one. Simpson also conjectured that the mistake could be fixed if working with weak units instead of strict units.
Voevodsky said he was not convinced about the mistake until 15 years after Simpson's paper. One reason is of course: could Simpson's paper be trusted? In fact Simpson at that time was already experiencing all the trouble Voevodsky describes: that his papers were too difficult to check fully. (In fact, one of Simpson's main papers on higher categories [A closed model structure for n-categories, internal Hom n-stacks and generalized Seifert-Van Kampen (1997)] had a gap, that it took a 300-page PhD thesis to fill [Péllissier, 2002].) Simpson worked ten years on formalising proofs in Coq (long before Voevodsky), hoping that he would one day be able to certify his own papers. (He came as far as giving a fully certified proof of the Gabriel-Zisman localisation theorem in simplicial homotopy theory, and suggested that this was probably the first proof ever! -- Gabriel and Zisman themselves admit that the details are too unwieldy to write down.)
Coming back to the KV paper: I worked several years on Simpson's conjecture, and tried hard to understand the KV ideas, but ended up approaching the conjecture with other methods, and eventually drifted to other problems. I did not get deep enough into their proof to understand what went wrong. But now, over the past few years, Simon Henry has made important inroads to the theory, and proved a version of the conjecture. His approach follows the KV idea, but uses a lot of modern machinery. I don't know if he can pinpoint the precise mistake in KV, but in any case Henry's actual results are of course more important than the historical curiosity of what went wrong in the KV proof.
Oops, that's probably over the page limit :-( And sorry for the full stops. I hope it does not sound too much like a lecture.
sarahzrf said:
maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?
I think the style/type of maths that's taught before university, at least in the UK, trains people to be good programmers in this sense; the young people who are "good at maths" (in terms of exam success) are typically those that manage to internalise algorithms and understand how to solve the maths problems they're presented with rather than to understand what it is they're actually doing. Perhaps that's a cynical take on the situation, though.
sarahzrf said:
actually fuck youre right i cant compile my proofs :cry:
image.png
"Error: Pattern(omniscience(negd o P)) was not completely instantiated..." you almost got there, @sarahzrf , you almost made all academic pursuit irrelevant :rolling_on_the_floor_laughing:
Nicholas Scheel said:
it’s the classic argument of whether 19374829^163947^16373^16372 actually exists.
Funnily enough I was just wondering exactly that myself. What are the chances? :rolling_on_the_floor_laughing:
sarahzrf said:
everything can be meta-theoretically countable
Actually everything infinite has a countable model, even better! :slight_smile:
Amar Hadzihasanovic said:
This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.
In that quote Voevodsky speaks of a different 1993 paper, in which he himself found a mistake a few years later. John Baez was talking about the Kapranov-Voevodsky 1991 paper that was disproved by Simpson. From what I know, it is true that the latter had not received much attention.
I hate chatrooms... :sweat_smile:
Here's what I wrote:
And the point I was trying to make is what drove Voevodsky to develop HoTT. He was terrified to find out that there was a flaw in his work and nobody had discovered the error, even though plenty of people had apparently read it.
I know it was late at night his local time (I was not expecting him to engage), but I cited the very article that contained the quote about the 1993 paper. The article made it clear there were at least two papers where Voevodsky goofed. That was why I was so baffled when he responded like that.
Well, it is a nice little case study of how academia really works. Also, chatrooms are the worst (and best, arrrrghh...)
Amar Hadzihasanovic said:
But it is true that Voevodsky seems to have thought that the 1991 paper was correct for many years (even doubting the Simpson counterexample for a while)...
More reason in favour of the formalisation of mathematics. Again (because I've said this so many times elsewhere), not asking mathematicians to write code, just wishing that the tech will catch up to make the user experience more palatable.
Joachim Kock said:
EDIT: while I was writing this lecture, Amar made some of the same points, in short and suitable ways :-)
Yes, and I realised, as Amar had pointed out, that John had thought that I was talking about the 1991 paper, when I had meant the 1993 paper.
Amar Hadzihasanovic said:
The 1991 paper is quite an “extraordinary” example in that it really contains no proper proofs, only sketches of arguments. So I agree with John that it would have been much to ask mathematicians who were already doubtful of the result to first reconstruct the supposed proof, and then find a mistake. :)
Yeah. And again, Carlos Simpson did write a paper in 1998 showing what everyone believed, namely that the result of the 1991 paper was false.
So, I think math was working just fine in this episode, albeit a bit slowly: if someone writes a sketchy paper with a wrong-looking result in a fairly esoteric area, other people aren't going to drop what they're doing and try to make the arguments precise in order to locate a flaw: most of them will just ignore this paper.
It's a bit like if a programmer released some hard-to-read code that claims to do something that's pretty obviously impossible: will other programmers spend time looking for a bug?
But okay, enough about that. What was the 1993 paper that Voevodsky was talking about, in which he himself found a mistake? Oh, I see @Joachim Kock explained it:
One is a mistake made in a technical lemma in the field of motivic cohomology, a field involving top mathematicians at top universities, Voevodsky himself being a top expert getting later the Fields medal for his work -- everything top. This is the case mentioned where people held seminars all over the world, but did not detect the mistake, accepting the technical result without checking the proof, trusting Voevodsky's authority, and using it in subsequent work. Voevodsky himself found the gap and provided a correct proof. In the end it was just a technical mistake, but potentially affecting many people's work.
While writing large amounts of complex code I inevitably find bugs. This makes me feel that mathematicians are overly optimistic regarding the perfection of their work.
I always feel computer scientists overestimate how much mathematicians think their work is perfect, or even care. It's not like anyone is gonna die if the proof of some esoteric conjecture later on turns out to have a flaw. If the result is important enough, people will keep looking at it and probably eventually find the mistake.
In really important subjects, like politics or economics, everything is and always has been a complete shitshow. It's sort of weird that people get freaked out by mistakes in proofs.
Naïve set theory had a pretty massive bug, but a small team of hackers worked intensively on the issue and were able to roll out a patch.
More mathematics simplifies older work than develops new work. Consider how current proofs of Fermat's Last Theorem are much shorter than the original proof.
Morgan Rogers said:
sarahzrf said:
maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?
I think the style/type of maths that's taught before university, at least in the UK, trains people to be good programmers in this sense; the young people who are "good at maths" (in terms of exam success) are typically those that manage to internalise algorithms and understand how to solve the maths problems they're presented with rather than to understand what it is they're actually doing. Perhaps that's a cynical take on the situation, though.
this is the opposite of what i mean
memorizing algorithms does not make you any better at programming than it does at math
in programming more than in math, you can't get away with not understanding what it is that you're doing, because you can't fudge the details as much when you don't get what they mean
John Baez said:
I always feel computer scientists overestimate how much mathematicians think their work is perfect, or even care. It's not like anyone is gonna die if the proof of some esoteric conjecture later on turns out to have a flaw. If the result is important enough, people will keep looking at it and probably eventually find the mistake.
In really important subjects, like politics or economics, everything is and always has been a complete shitshow. It's sort of weird that people get freaked out by mistakes in proofs.
There was a good question on MO about how serious it would be to find an error in math. The top answer was: 'meh'. Because it's really hard for a mathematical subject to prove a conceptually wrong result and go on for a long time without completely falling apart. Usually errors are formal and not conceptual.
Moreover, unlike programming, math floats on a sea of informal abstraction which is flexible enough to avoid many of the complications of coding (e.g. there's no buffer overflow in math).
That said, I'm a big fan of computer formalization also for the purpose of error checking. But I don't live in fear.
(e.g. there's no buffer overflow in math).
There is, but then I take a nap. :upside_down:
John Baez said:
(e.g. there's no buffer overflow in math).
There is, but then I take a nap. :upside_down:
Mmmh I think I don't know English well enough to get this pun :sweat_smile:
Or math
Or you
It wasn't a pun. I just mean that when I'm doing math and my brain gets overloaded and things stop making sense, I take a nap... and when I wake up, things are usually better.
Oh well, that's the mathematician though! :laughing:
Right, but math is just mathematicians doing stuff. Not to get all philosophical....
As a teen I read about Poincaré's remark about mathematicians being unable to view the source of their mathematical inspiration and thoughts. I learned yoga fifty years ago to learn how to view my mind's activities. This allows me to be inspired and to keep working over the decades when most people last a few years in the research of tetration. I see meta-cognition, generalization, and abstraction as vital to mathematical growth. I'm a mathematical hedonist; I do math because it feels wonderful.
Matteo Capucci said:
That said, I'm a big fan of computer formalization also for the purpose of error checking. But I don't live in fear.
That's great. But Voevodsky did, and that prompted him to eventually develop HoTT. Fear can be a useful thing.
The thing that I find unsatisfying about John's opinion here is that it was exactly the same opinion people used to have about code: "There's plenty of code that's not gonna kill anyone if there's a bug, so why are people so obsessed with debuggers anyway?"
I don't think that's productive. It's because people were obsessed with writing better tools for programmers to write better programs that programming has become a lot easier than before. And as it turns out, in our highly networked cloud computing age, a bug can mean a security vulnerability, or the difference between profit and loss.
John Baez said:
It's sort of weird that people get freaked out by mistakes in proofs.
Yeah, it's really weird how some people are freaking out about the allegations of mistakes in this proof.
I've been waiting thirty years to get my research reviewed. Last chance for a human to review my work because within five to ten years computers will be reviewing people's mathematics.
Daniel Geisler said:
I've been waiting thirty years to get my research reviewed. Last chance for a human to review my work because within five to ten years computers will be reviewing people's mathematics.
Five to ten years would be awesome. I'm not too optimistic about that, since there are a lot of issues, both technological and organisational, to be addressed before that can happen.
sarahzrf said:
memorizing algorithms does not make you any better at programming than it does at math
I deliberately said "internalise" rather than "memorise", although to be fair the latter would have been almost as applicable in my high school maths example. I mean we get trained to solve a specific form of problem very efficiently, without any of the creativity being directed into reformulating or expanding the question, which is the form of constraint that my software dev friends seem to deal with (albeit with a much larger solution space to work in), whereas half of the work that I do goes into working out which questions to ask. In any case I'm going to stop starting sentences with "in computer science"; I am not a computer scientist, and in spite of having such friends am not qualified to makes such comments. :sweat_smile:
There's something called "plumbing" in the work of programming that's figuring out how to fit different systems together. It's dealing with DLL hell, and what passes for build systems (I'm looking at you, maven and sbt), and translating between different message formats, and knowing about the difference in access times between a register and the cache and main memory and disk and network, and being able to adapt old build scripts to new systems, and all of that stuff that mathematicians (for the most part) have to idealize away in order to keep the complexity low enough that they can reason about it.
There are also mathematicians who want to make the programming languages they have to use as much like the math they enjoy as possible. Unfortunately, sometimes the compilers and the execution model aren't up to it. Scala written like Haskell performs terribly and tends to find the problems in the type system (I'm looking at you, internal classes).
So there are those programmers whose experience with math has been that it's simply too idealized to be useful in their job and, when used without a deep understanding of the abstractions layered on top of the hardware, often leads to very slow code.
theres just, a lot of things
some of them legitimate, some not
and they all tend to get blended together ;-;
Probably also that we mathematicians can be downright insufferable :stuck_out_tongue:
@Rongmin Lu One crucial difference between mathematics and code is that code that is conceptually correct but contains technical errors can create big problems in the real world as soon as it is put into production; whereas mathematics that is conceptually correct but contains technical errors needs to be fixed eventually, but as long as it is eventually fixed no lasting damage is done (e.g. it doesn't create security holes through which malicious actors can hack into mathematics).
I wrote a fairly extensive MO answer about mistakes in mathematics a bit ago.
Mike Shulman said:
Rongmin Lu One crucial difference between mathematics and code is that code that is conceptually correct but contains technical errors can create big problems in the real world as soon as it is put into production; whereas mathematics that is conceptually correct but contains technical errors needs to be fixed eventually, but as long as it is eventually fixed no lasting damage is done (e.g. it doesn't create security holes through which malicious actors can hack into mathematics).
The problem is with "eventually". The long-running saga that started around the same time as the HoTT special year may have done lasting damage to pure mathematics, precisely because "eventually" took too long. I think there's been damage done because the most recent instalment of that saga led to people talking about "socially constructed mathematics" again. It's not the first time someone has brought up that argument, and it probably won't be the last. That IS damage.
People who write code also care about it even when it's not put into production: see the open source software movement. Github provides a reasonably nice model of how to build a system that provides timely feedback and allows others to fix technical errors in a visible manner. The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.
What "saga" are you referring to?
Rongmin Lu said:
People who write code also care about it even when it's not put into production: see the open source software movement.
Of course, by "in production" I meant simply "being used". Open source software certainly is that.
The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.
That's a bold claim. I doubt it. What evidence do you offer?
Mike Shulman said:
What "saga" are you referring to?
I assumed he meant the ABC conjecture https://www.math.columbia.edu/~woit/wordpress/?p=11709
@Rongmin Lu
u might find this funny, but i was kinda assuming for a while that you weren't a programmer based on how positive your characterization of the programming world is :upside_down:
like, an outsider's idealism
in some ways at least
sarahzrf said:
u might find this funny, but i was kinda assuming for a while that you weren't a programmer based on how positive your characterization of the programming world is :upside_down:
I'm very aware of all the downsides of the programming world. However, as I had said earlier, I was merely pointing out all the things that are available that can make a programmer's life much easier, things that mathematics currently does not have.
The fact that large swathes of the programming world may not make much use of these tools is testament to Gibson's observation that "the future is here, it's just not evenly distributed".
Mike Shulman said:
Of course, by "in production" I meant simply "being used". Open source software certainly is that.
By "in production", programmers usually mean code that's being deployed to be used actively in a business. There are plenty of Github repos that are not "being used" in that sense of being "in production". Some repos even put disclaimers discouraging you from using them because they're not production-ready yet.
@Rongmin Lu Well, not being a programmer, I apologize for using the jargon wrong. But I thought the meaning would be clear from the point I was making, since using code in the real world is obviously sufficient for bugs to cause real problems, whether or not it is "deployed" (whatever that means in your jargon) to be used actively in a "business" (however that word is defined).
Mike Shulman said:
Well, not being a programmer, I apologize for using the jargon wrong. But I thought the meaning would be clear from the point I was making, since using code in the real world is obviously sufficient for bugs to cause real problems, whether or not it is "deployed" (whatever that means in your jargon) to be used actively in a "business" (however that word is defined).
Sure, but my contention was that not all open source software is being used actively in the real world. There are lots of reasons why people publish open source software on Github, and one of them is to invite feedback because the code isn't ready to be used in the real world. Lots of code on Github is just research code or personal projects. It's still open source software.
Mike Shulman said:
Rongmin Lu said:
The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.
That's a bold claim. I doubt it. What evidence do you offer?
Since you have experience using Github for the HoTT book, perhaps you could tell us how difficult it was to use Github to report and fix errors during the project?
Perhaps you can tell us how Github still creates the kind of problems that Voevodsky had identified in his IAS article:
My claim is a counterfactual. You have actual experience using Github for a real-life project. What evidence do you offer to counter my argument?