Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: community: general

Topic: CS and Math(s)


view this post on Zulip sarahzrf (Apr 04 2020 at 03:09):

Rongmin Lu said:

sarahzrf said:

Rongmin Lu said:

Anecdotally, many programmers I know hate mathematics. For some reason, the way of thinking that we take for granted is very alien and uncomfortable to them, whereas the way of thinking that pays off in programming comes more naturally for them.

as a programmer, can second this anecdote, there's a lot to unpack

Please unpack, the floor is yours.

god im not sure where to start

view this post on Zulip sarahzrf (Apr 04 2020 at 03:10):

i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking

view this post on Zulip sarahzrf (Apr 04 2020 at 03:12):

but there's definitely something to it, it's just nuanced

view this post on Zulip sarahzrf (Apr 04 2020 at 03:12):

i almost wanna say that some of the differences are aesthetic, but that's not quite what i mean

view this post on Zulip sarahzrf (Apr 04 2020 at 03:14):

maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?

view this post on Zulip sarahzrf (Apr 04 2020 at 03:14):

anyway that's only part of it

view this post on Zulip sarahzrf (Apr 04 2020 at 03:15):

that entirely aside, there's also a certain amount of, like, blind hostility toward math among many programmers, as you said in less colorful language

view this post on Zulip Daniel Geisler (Apr 04 2020 at 03:16):

So thinking is a cospan of mathematics and computer science?

view this post on Zulip sarahzrf (Apr 04 2020 at 03:16):

:)

view this post on Zulip sarahzrf (Apr 04 2020 at 03:17):

im tempted to say that some of that blind hostility is a symptom of a bigger pattern of sort of like

view this post on Zulip Dan Doel (Apr 04 2020 at 03:17):

There's a lot of hostility toward academic investigation of programming among programmers.

view this post on Zulip sarahzrf (Apr 04 2020 at 03:17):

yeah

view this post on Zulip sarahzrf (Apr 04 2020 at 03:17):

maybe not quite anti-intellectualism, but anti-academicism sounds right

view this post on Zulip sarahzrf (Apr 04 2020 at 03:18):

not all of it is unfounded, but

view this post on Zulip sarahzrf (Apr 04 2020 at 03:20):

there's this trope among many programmers of Everything Getting Invented By Hackers Outside Of The System, roughly

view this post on Zulip sarahzrf (Apr 04 2020 at 03:20):

"hackers" in the traditional sense, not as in "cracker", of course :upside_down::upside_down::upside_down:

view this post on Zulip sarahzrf (Apr 04 2020 at 03:21):

you also have the fact that many programmers are autodidacts

view this post on Zulip sarahzrf (Apr 04 2020 at 03:21):

which synergizes with that trope tbh

view this post on Zulip Dan Doel (Apr 04 2020 at 03:22):

Right. That's probably a big part.

view this post on Zulip Dan Doel (Apr 04 2020 at 03:22):

Also just the general artificial divide, which results in some of the opposite phenomenon, too.

view this post on Zulip sarahzrf (Apr 04 2020 at 03:40):

i guess broadly there are a lot of programmers who have overly-universally-applied skepticism of "intellectual authority"

view this post on Zulip Dan Doel (Apr 04 2020 at 03:42):

Well, they're all the smartest person ever to live.

view this post on Zulip sarahzrf (Apr 04 2020 at 04:03):

100% true

view this post on Zulip (=_=) (Apr 04 2020 at 05:29):

sarahzrf said:

i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking

Well, duh, programming is just applied maths, amirite? :smiling_devil:

view this post on Zulip (=_=) (Apr 04 2020 at 05:31):

sarahzrf said:

maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?

I'd put it this way: you can compile programs, but you can't compile proofs (yet).

view this post on Zulip sarahzrf (Apr 04 2020 at 05:31):

yes you can

view this post on Zulip (=_=) (Apr 04 2020 at 05:31):

Not in the way most proofs are written today.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:32):

actually fuck youre right i cant compile my proofs :cry:
image.png

view this post on Zulip (=_=) (Apr 04 2020 at 05:32):

OTOH, programs have to be compiled, and they have been since time immemorial.

view this post on Zulip (=_=) (Apr 04 2020 at 05:33):

That's the difference. When programmers push programs to production, there's a defined QA process.

view this post on Zulip (=_=) (Apr 04 2020 at 05:33):

When mathematicians push proofs to journals... they just keep pushing until one of them gives.

view this post on Zulip (=_=) (Apr 04 2020 at 05:34):

You know how the "peer review" process works, I hope? How does that compare with the code review process?

view this post on Zulip sarahzrf (Apr 04 2020 at 05:34):

if programs have to be compiled then explain this image.png

view this post on Zulip (=_=) (Apr 04 2020 at 05:35):

There's probably a reason why scripts-as-a-service hasn't taken off yet.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:36):

https://news.ycombinator.com/item?id=1982489

view this post on Zulip sarahzrf (Apr 04 2020 at 05:36):

sounds like taking off to me

view this post on Zulip sarahzrf (Apr 04 2020 at 05:36):

hmm, i'm just being flippant :)

view this post on Zulip (=_=) (Apr 04 2020 at 05:36):

Alright, let's be pedantic: programs have to be processed into workable machine code. Whether you call this interpreting or compiling is just semantics.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:37):

/me takes a deep breath

view this post on Zulip sarahzrf (Apr 04 2020 at 05:37):

FIRST of all

view this post on Zulip sarahzrf (Apr 04 2020 at 05:37):

no, but seriously

view this post on Zulip sarahzrf (Apr 04 2020 at 05:37):

peano arithmetic can trivially be compiled to itself, but it cannot interpret itself

view this post on Zulip sarahzrf (Apr 04 2020 at 05:38):

an interpreter is something that performs an act of dequotation

view this post on Zulip (=_=) (Apr 04 2020 at 05:38):

Nobody writes proofs in Peano arithmetic. It's like the machine code of mathematics.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:38):

a compiler merely transforms syntax

view this post on Zulip sarahzrf (Apr 04 2020 at 05:38):

okay, then ZFC can trivially be compiled to itself, but it cannot interpret itself

view this post on Zulip (=_=) (Apr 04 2020 at 05:38):

The point being: when code breaks, we will (eventually) know.

view this post on Zulip (=_=) (Apr 04 2020 at 05:39):

When proofs break, nobody knows, until and unless somebody actually sits down and reviews the proofs.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 05:39):

Have you folks seen MetaMath from Princeton?

view this post on Zulip sarahzrf (Apr 04 2020 at 05:39):

distantly

view this post on Zulip sarahzrf (Apr 04 2020 at 05:40):

anyway i do not disagree with you on the original point you wanted to make about programming dealing with a fundamentally rigid being

view this post on Zulip sarahzrf (Apr 04 2020 at 05:40):

but i think you've gone off the rails :thinking:

view this post on Zulip Daniel Geisler (Apr 04 2020 at 05:40):

It compiles math. 2+2=4 is around theorem 2700. :slight_smile:

view this post on Zulip sarahzrf (Apr 04 2020 at 05:41):

yikes

view this post on Zulip sarahzrf (Apr 04 2020 at 05:41):

you can prove 2+2=4 in coq from absolute scratch in probably 10 lines

view this post on Zulip sarahzrf (Apr 04 2020 at 05:41):

most of which are defining equality and natural numbers and addition

view this post on Zulip (=_=) (Apr 04 2020 at 05:41):

Daniel Geisler said:

It compiles math. 2+2=4 is around theorem 2700. :)

Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...

view this post on Zulip (=_=) (Apr 04 2020 at 05:42):

sarahzrf said:

you can prove 2+2=4 in coq from absolute scratch in probably 10 lines

That's an improvement.

view this post on Zulip (=_=) (Apr 04 2020 at 05:42):

Anyway, my point was: programmers operate on a more concrete level than pure mathematicians, even though they are both "just" doing the same thing.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:42):

the actual proof of 2+2=4 is gonna be like

Definition two_four : 2 + 2 = 4 := eq_refl.
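For context, a hedged sketch of what that from-scratch development could look like in Coq (the primed names nat', plus', eq' are illustrative, chosen here only to avoid clashing with the standard library); since 2 + 2 computes to 4, the final proof term is just reflexivity:

```coq
(* Minimal sketch: naturals, addition, and equality defined from
   scratch, then 2 + 2 = 4 proved by computation. *)
Inductive nat' : Type := O' : nat' | S' : nat' -> nat'.

Fixpoint plus' (n m : nat') : nat' :=
  match n with
  | O' => m
  | S' p => S' (plus' p m)
  end.

Inductive eq' {A : Type} (x : A) : A -> Prop := eq_refl' : eq' x x.

(* 2 + 2 reduces to 4, so reflexivity closes the goal. *)
Definition two_two_four :
  eq' (plus' (S' (S' O')) (S' (S' O'))) (S' (S' (S' (S' O')))) :=
  eq_refl' _.
```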

view this post on Zulip sarahzrf (Apr 04 2020 at 05:43):

i never said they were "just" doing the same thing

view this post on Zulip (=_=) (Apr 04 2020 at 05:43):

sarahzrf said:

i mean, well, to begin with, i personally think that programming shares a ton cognitively with math, so im not entirely on board with the statement about ways of thinking

All right, you said this.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 05:43):

It's fascinating seeing the theorems that are chosen over similar theorems. They tweak math.

view this post on Zulip (=_=) (Apr 04 2020 at 05:44):

sarahzrf said:

maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?

And this.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:44):

well, you're not putting it to the same end, even if you're thinking in many of the same ways

view this post on Zulip sarahzrf (Apr 04 2020 at 05:45):

but look

view this post on Zulip (=_=) (Apr 04 2020 at 05:45):

Exactly, and that's kind of my point. The end is more concrete for programmers, and more abstract for mathematicians.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:45):

ah, sure :)

view this post on Zulip (=_=) (Apr 04 2020 at 05:45):

I'm glad we can agree on this.

view this post on Zulip (=_=) (Apr 04 2020 at 05:46):

I don't know about the coend though.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:46):

the coend is for programming

view this post on Zulip sarahzrf (Apr 04 2020 at 05:46):

that's why it shows up in optics

view this post on Zulip (=_=) (Apr 04 2020 at 05:47):

Funny how I had a hard time explaining to some programmers what a coend is then...

view this post on Zulip Daniel Geisler (Apr 04 2020 at 05:47):

I'm an autistic yogi who studies meta-cognition and has intense visualization synesthesia that might mean I'm a savant. I've also done large amounts of physics, math, programming and psychology. Ask me anything. :slight_smile:

view this post on Zulip (=_=) (Apr 04 2020 at 05:54):

And the point I was trying to make is what drove Voevodsky to develop HoTT. He was terrified to find out that there was a flaw in his work and nobody had discovered the error, even though plenty of people had apparently read it.

Source: https://www.ias.edu/ideas/2014/voevodsky-origins

view this post on Zulip sarahzrf (Apr 04 2020 at 05:55):

here is my question: can metamath do this image.png

view this post on Zulip T Murrills (Apr 04 2020 at 05:56):

you (the metamath) just did :P

view this post on Zulip sarahzrf (Apr 04 2020 at 05:56):

:O

view this post on Zulip T Murrills (Apr 04 2020 at 05:57):

*is about to go to sleep* *starts reading the shaky foundations topic* hmm. somehow this is not helping me sleep.

view this post on Zulip sarahzrf (Apr 04 2020 at 05:57):

i apologize for the forall _ : A, stuff, coq's ordinary A -> B notation is actually defined in the base library as syntactic sugar for that

view this post on Zulip T Murrills (Apr 04 2020 at 06:00):

yes thank you!! being shot downwards into my bed by Samuel L. Jackson’s authoritative voice is just the ticket :)

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:00):

Metamath's strength is that the system is a small program that loads all the basic rules of math. I believe in an 80%/20% rule: if one technology is at 80% and another is at 20%, I learn and use both. Coq is awesome, but it complements MetaMath.

view this post on Zulip sarahzrf (Apr 04 2020 at 06:01):

MethMath :flushed:

view this post on Zulip sarahzrf (Apr 04 2020 at 06:01):

is that what erdos did

view this post on Zulip (=_=) (Apr 04 2020 at 06:01):

sarahzrf said:

is that what erdos did

Pretty much, yeah

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:03):

Shouldn't write with Sisters of Mercy cranking.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:07):

Maybe we should change the Zulip turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.

view this post on Zulip (=_=) (Apr 04 2020 at 06:08):

Daniel Geisler said:

Metamath's strength is that the system is a small program that loads all the basic rules of math. I believe in an 80%/20% rule: if one technology is at 80% and another is at 20%, I learn and use both. Coq is awesome, but it complements MetaMath.

I think what's lacking with these systems is that it requires people to learn a new syntax. This is something that working mathematicians are not prepared to do at the moment. There's a need to couple systems like this with a natural-language-processing module.

view this post on Zulip (=_=) (Apr 04 2020 at 06:09):

Daniel Geisler said:

Maybe we should change the Zulip turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.

Great idea. I seem to recall someone has written a paper on using CT to model PERT charts, but I can't find it at the moment.

view this post on Zulip Alexis Hazell (Apr 04 2020 at 06:09):

Rongmin Lu said:

Daniel Geisler said:

It compiles math. 2+2=4 is around theorem 2700. :)

Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...

Though to be fair: "One of the reasons that the proof of 2 + 2 = 4 is so long is that 2 and 4 are complex numbers—i.e. we are really proving (2+0i) + (2+0i) = (4+0i)—and these have a complicated construction (see the Axioms for Complex Numbers) but provide the most flexibility for the arithmetic in our set.mm database." (http://us.metamath.org/mpeuni/mmset.html#trivia)

view this post on Zulip (=_=) (Apr 04 2020 at 06:14):

Alexis Hazell said:

Rongmin Lu said:

Daniel Geisler said:

It compiles math. 2+2=4 is around theorem 2700. :)

Remember when I said something about several hundred pages of Principia Mathematica being needed to prove 1+1=2? Yeah...

Though to be fair: "One of the reasons that the proof of 2 + 2 = 4 is so long is that 2 and 4 are complex numbers—i.e. we are really proving (2+0i) + (2+0i) = (4+0i)—and these have a complicated construction (see the Axioms for Complex Numbers) but provide the most flexibility for the arithmetic in our set.mm database." (http://us.metamath.org/mpeuni/mmset.html#trivia)

This is the point at which many working mathematicians will throw their hands up and say: "Formalise maths? Over my dead body!"

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:14):

2+2=4 MetaMath page http://us.metamath.org/mpeuni/mmset.html#trivia
Proof is 184 levels at its deepest. There will be a test.

view this post on Zulip (=_=) (Apr 04 2020 at 06:14):

Daniel Geisler said:

2+2=4 MetaMath page http://us.metamath.org/mpeuni/mmset.html#trivia
Proof is 184 levels at its deepest. There will be a test.

Yeah nah, I'll pass.

view this post on Zulip (=_=) (Apr 04 2020 at 06:15):

And that's the problem.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:20):

MetaMath is a bastion of set logic with its 27000 proofs. My rhetorical question is: wouldn't a category-theoretic version result in a significantly terser system?

view this post on Zulip (=_=) (Apr 04 2020 at 06:21):

Rongmin Lu said:

Daniel Geisler said:

Maybe we should change the Zilup turtles to pointy-headed managers. I think it is pointy-headed managers all the way down.

Great idea. I seem to recall someone has written a paper on using CT to model PERT charts, but I can't find it at the moment.

Ah, found it. It was a post by Simon Willerton on the n-Category Cafe, "Project Scheduling and Copresheaves". Somehow this never made it into a paper by him, although someone reached out to him about it last year.

view this post on Zulip (=_=) (Apr 04 2020 at 06:23):

Daniel Geisler said:

MetaMath is a bastion of set logic with its 27000 proofs. My rhetorical question is: wouldn't a category-theoretic version result in a significantly terser system?

It depends on what you mean by "terse". As sarahzrf has pointed out above, Coq seems to be terser. But I think you still need a database of results. 27000 proofs are probably not enough.

view this post on Zulip sarahzrf (Apr 04 2020 at 06:33):

i dont know that coq is terser

view this post on Zulip sarahzrf (Apr 04 2020 at 06:33):

it's a richer system

view this post on Zulip Daniel Geisler (Apr 04 2020 at 06:36):

I suspect that Coq and CT have a larger "operating system". What would be interesting is knowing which could utilize resources more effectively.

view this post on Zulip John Baez (Apr 04 2020 at 06:49):

Rongmin wrote:

He was terrified to find out that there was a flaw in his work and nobody had discovered the error...

Nobody bothered to find the specific error, not even Simpson (who wrote a paper showing Voevodsky's result was wrong), but nobody believed the result was correct, which is why nobody used it. If it had been right it would have been so wonderful everyone would have been using it a lot!

Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.

But if this led him to invent HoTT, I guess it all worked out for the best!

view this post on Zulip (=_=) (Apr 04 2020 at 07:02):

John Baez said:

Nobody bothered to find the specific error, but lots of us knew the result was wrong, which is why nobody used it. If it had been right it would have been so important everyone would have been using it a lot!

Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.

But if this led him to invent HoTT, I guess it all worked out for the best!

See, that's kind of the problem. "Nobody bothered".

That's what I was trying to point out to sarahzrf to be the difference between programmers and mathematicians: the former tends to have a better feedback loop than the latter.

view this post on Zulip (=_=) (Apr 04 2020 at 07:03):

In an environment with a good workflow, code is scrutinised before being put into production.

And in today's cloud computing world, a bug is a security issue that can bring down your business, so QA is very important.

view this post on Zulip (=_=) (Apr 04 2020 at 07:04):

Compare this to the maths world. "Really it was Voevodsky's job to find the mistake in his proof"? This, for him, was an unknown unknown. How was he to know, if nobody would tell him, because "nobody bothered"?

view this post on Zulip Daniel Geisler (Apr 04 2020 at 07:05):

The first moon landing had a serious bug found right before liftoff, a swapped sign resulted in negative gravity.

view this post on Zulip (=_=) (Apr 04 2020 at 07:05):

Now imagine if "nobody bothered".

view this post on Zulip (=_=) (Apr 04 2020 at 07:09):

From the article Voevodsky wrote:

This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

view this post on Zulip (=_=) (Apr 04 2020 at 07:10):

There's the deference to authority --- or as John Baez suggested, "nobody bothered" --- that eventuated. "Nobody bothered" because "surely" someone of Voevodsky's calibre and status would have already figured out that there was a problem.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 07:11):

There is a certain objectivity about programmers that few mathematicians have. When I walk into a new programming site, the reality of who I am is sorted out in a matter of a day, maybe a few days at the most. Now who am I as a mathematician? That has many more degrees of freedom.

view this post on Zulip (=_=) (Apr 04 2020 at 07:12):

Daniel Geisler said:

That has many more degrees of freedom.

It does also mean that mathematicians enjoy more freedom, in that they're freer of scrutiny. I think that's perhaps why many are hostile to the formalisation of maths, amongst many other reasons.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 07:22):

A mathematician is required to live a life of continual abstraction, generalization and simplification.

view this post on Zulip (=_=) (Apr 04 2020 at 07:24):

Daniel Geisler said:

A mathematician is required to live a life of continual abstraction, generalization and simplification.

A pure mathematician, maybe. And this is a bit of a caricature as well, one that the culture seems happy to perpetuate.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 07:29):

At best, there should be no dichotomy between math, physics and computer science. Is computer science a cospan of math and physics?

view this post on Zulip sarahzrf (Apr 04 2020 at 07:30):

no

view this post on Zulip sarahzrf (Apr 04 2020 at 07:30):

computer science doesnt have that much in common with physics as far as im concerned

view this post on Zulip sarahzrf (Apr 04 2020 at 07:30):

Rongmin Lu said:

There's the deference to authority --- or as John Baez suggested, "nobody bothered" --- that eventuated. "Nobody bothered" because "surely" someone of Voevodsky's calibre and status would have already figured out that there was a problem.

he was suggesting the opposite :thinking:

view this post on Zulip sarahzrf (Apr 04 2020 at 07:31):

he said that nobody bothered finding the issue because they knew it was wrong, so why care about the specific reason

view this post on Zulip (=_=) (Apr 04 2020 at 07:31):

sarahzrf said:

he said that nobody bothered finding the issue because they knew it was wrong, so why care about the specific reason

Then he goes on to blame Voevodsky for not finding it out.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:31):

:)

view this post on Zulip (=_=) (Apr 04 2020 at 07:32):

John Baez said:

Really it was Voevodsky's job to find the mistake in his proof, given that nobody else believed it.

That's what he wrote, sarahzrf.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:33):

that means something different from what you're saying

view this post on Zulip (=_=) (Apr 04 2020 at 07:33):

Please explain?

view this post on Zulip sarahzrf (Apr 04 2020 at 07:34):

you're talking about an attitude that it should be voevodsky's job to make sure there arent any mistakes, so that other people dont end up thinking it's right because they missed the mistake

view this post on Zulip sarahzrf (Apr 04 2020 at 07:34):

but he's talking about it already not being seen as right, and it being his job to work out the issue

view this post on Zulip (=_=) (Apr 04 2020 at 07:35):

sarahzrf said:

but he's talking about it already not being seen as right, and it being his job to work out the issue

The thing was, according to Voevodsky, he spent many years not knowing that it wasn't right, and nobody told him.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:36):

well, take it up with john, not me :shrug:

view this post on Zulip (=_=) (Apr 04 2020 at 07:36):

Rongmin Lu said:

This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

As I quoted above.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:36):

apparently he has a different impression

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:36):

programmers almost always work with tangibles: almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary; all the functions they work with are computable, sort of by definition; and of course there’s a constant dialogue with the computer, not only correctness checking, but computationally – so if you don’t know how something behaves, you can literally run it on different inputs and see how it works.

of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.

but I use functional programming so I rarely have to worry about that :wink:

view this post on Zulip (=_=) (Apr 04 2020 at 07:36):

sarahzrf said:

well, take it up with john, not me :shrug:

I'm taking it up with him.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:36):

almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary

very false :scream:

view this post on Zulip (=_=) (Apr 04 2020 at 07:36):

But you just joined the convo.

view this post on Zulip (=_=) (Apr 04 2020 at 07:37):

Nicholas Scheel said:

programmers almost always work with tangibles: almost every type/set they work with is either finite or countably infinite, and usually they can serialize it to a nice syntax or to binary; all the functions they work with are computable, sort of by definition; and of course there’s a constant dialogue with the computer, not only correctness checking, but computationally – so if you don’t know how something behaves, you can literally run it on different inputs and see how it works.

Yup, you've just described how the feedback loop is faster for programmers than for mathematicians.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:37):

but the stuff about there being pushback & a dialogue is extremely on point

view this post on Zulip sarahzrf (Apr 04 2020 at 07:38):

i meant to react earlier when rongmin said something about feedback loops but i got distracted

view this post on Zulip sarahzrf (Apr 04 2020 at 07:38):

im pretty sure the feedback loop is why im able to hyperfocus on programming semireliably when it's a project im particularly interested in, whereas im rarely able to hyperfocus on math even when im very interested in it (actually im not sure how often i do but definitely less than programming)

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:39):

@sarahzrf care to explain your reaction re the sizes?

view this post on Zulip sarahzrf (Apr 04 2020 at 07:39):

:upside_down:

view this post on Zulip sarahzrf (Apr 04 2020 at 07:39):

Integer -> Integer is uncountable, for example

view this post on Zulip sarahzrf (Apr 04 2020 at 07:39):

and not very serializable

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:40):

but programming functions are computable, so I think it’s actually countable again ... meta-theoretically countable, or pseudo countable, or something

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:41):

also Dhall has function serialization, but I digress

view this post on Zulip sarahzrf (Apr 04 2020 at 07:41):

everything can be meta-theoretically countable

view this post on Zulip sarahzrf (Apr 04 2020 at 07:41):

skolem strikes again

view this post on Zulip (=_=) (Apr 04 2020 at 07:41):

Size issues are really not the problem here. Habits and ways of doing things are.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:43):

actually, here is a nice puzzle for you

view this post on Zulip sarahzrf (Apr 04 2020 at 07:43):

since you're so convinced that this type is actually countable

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:43):

well, part of the reason I brought up size, is that (excluding functions) most data that programmers work with has serialization and even decidable equality, and that ties back into the computational aspect

view this post on Zulip sarahzrf (Apr 04 2020 at 07:43):

let's take, i dunno, MLTT w/ natural numbers and equality, or something

view this post on Zulip sarahzrf (Apr 04 2020 at 07:44):

and add a constant code : (nat -> nat) -> nat to it

view this post on Zulip sarahzrf (Apr 04 2020 at 07:45):

and some rules so that an application of this constant to a value in an empty context reduces to the gödel number of the term it is applied to

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:46):

like imagine a mathematician just asking their computer what the genus of an arbitrary manifold is – that could never happen, the set of manifolds is way too huge to get computational results from it ...

view this post on Zulip sarahzrf (Apr 04 2020 at 07:47):

then a constant code_spec : forall f g : nat -> nat, code f = code g -> f = g

view this post on Zulip sarahzrf (Apr 04 2020 at 07:47):

with computational behavior that... yadda yadda, this stuff probably cant be made to work particularly sanely in practice but there's a moral i want to get at

view this post on Zulip (=_=) (Apr 04 2020 at 07:47):

Nicholas Scheel said:

like imagine a mathematician just asking their computer what the genus of an arbitrary manifold is – that could never happen, the set of manifolds is way too huge to get computational results from it ...

Yeah, well, if you ask a human what the genus of an arbitrary manifold is, and don't give any information, you won't get anything either.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

now: it is provable in mltt, by the standard cantor argument, that there is no injection (nat -> nat) -> nat

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

but we've just added one to the system

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

so if we apply the proof to it, we get a proof of false

view this post on Zulip T Murrills (Apr 04 2020 at 07:48):

sigh I’m still up

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

but we gave computational behavior to all of the pieces

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

so what gives?

view this post on Zulip sarahzrf (Apr 04 2020 at 07:48):

what happens when we beta-reduce the proof of false?

view this post on Zulip sarahzrf (Apr 04 2020 at 07:49):

riddle me this, if u think nat -> nat is so countable in a programming language
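For reference, a sketch of the diagonal step being invoked, written here in the simpler "no surjection from nat onto nat -> nat" form (the no-injection version cited above needs extra machinery, so this is an illustrative stand-in, not the exact statement); `n_Sn` is the preloaded stdlib lemma that no natural number equals its own successor:

```coq
(* Sketch: Cantor's diagonal argument, no-surjection form. *)
Theorem cantor (f : nat -> (nat -> nat)) :
  ~ (forall g : nat -> nat, exists n, f n = g).
Proof.
  intros Hsurj.
  (* feed the alleged surjection its own diagonal, bumped by one *)
  destruct (Hsurj (fun m => S (f m m))) as [n Hn].
  assert (H : f n n = S (f n n)).
  { change (f n n = (fun m => S (f m m)) n). now rewrite <- Hn. }
  exact (n_Sn (f n n) H).
Qed.
```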

view this post on Zulip T Murrills (Apr 04 2020 at 07:50):

anyway I think @Nicholas Scheel nonetheless has a point: mathematical terms are thought of, in general, as referring to abstract objects that could not in principle be stored in a computer.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:50):

youd be surprised

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:50):

actually, when you compile a program, the number of functions is finite :troll:

view this post on Zulip (=_=) (Apr 04 2020 at 07:51):

Nicholas Scheel said:

of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.

That's another thing I should also point out. Pure mathematicians, at least, work with things that are mostly static. CT is most definitely static: you may be able to deal with dynamic objects, but only with great difficulty.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:52):

Nicholas Scheel said:

actually, when you compile a program, the number of functions is finite :troll:

defunctionalization time
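
A quick gloss on defunctionalization, for readers who haven't met it: every lambda in a program is replaced by a first-order constructor, and a single `apply` interpreter does all the calling. A hedged Python sketch (the `Add`/`Mul`/`Compose` constructors are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Add:       # stands for lambda x: x + n
    n: int

@dataclass
class Mul:       # stands for lambda x: x * n
    n: int

@dataclass
class Compose:   # stands for lambda x: f(g(x))
    f: object
    g: object

def apply(fn, x):
    # the one higher-order "function call" left in the program
    if isinstance(fn, Add):
        return x + fn.n
    if isinstance(fn, Mul):
        return x * fn.n
    if isinstance(fn, Compose):
        return apply(fn.f, apply(fn.g, x))
    raise TypeError(fn)

h = Compose(Add(1), Mul(2))   # the function x -> 2*x + 1
assert apply(h, 10) == 21
```

After this transformation the "functions" really are a finite set of data constructors, which is the troll's grain of truth.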

view this post on Zulip (=_=) (Apr 04 2020 at 07:52):

T Murrills said:

anyway I think Nicholas Scheel nonetheless has a point: mathematical terms are thought of, in general, as referring to abstract objects that could not in principle be stored in a computer.

That is soooo 20th century.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:52):

Rongmin Lu said:

Nicholas Scheel said:

of course the tradeoff is that sometimes programmers work with things that are dynamic, because they are mutable or have hidden state etc., so that ruins a lot of the nice properties: suddenly it’s a lot harder to serialize it, to probe it, to make sense of it, etc.

That's another thing I should also point out. Pure mathematicians, at least, work with things that are mostly static. CT is most definitely static: you may be able to deal with dynamic objects, but only with great difficulty.

brouwer is spinning violently in his grave :weary:

view this post on Zulip sarahzrf (Apr 04 2020 at 07:52):

also lawvere probably

view this post on Zulip sarahzrf (Apr 04 2020 at 07:52):

and he's not even dead yet!

view this post on Zulip (=_=) (Apr 04 2020 at 07:53):

Please explain?

view this post on Zulip T Murrills (Apr 04 2020 at 07:53):

tell me most mathematicians don’t think of a topological space as actually having an uncountable number of points

view this post on Zulip (=_=) (Apr 04 2020 at 07:53):

Especially with Brouwer.

view this post on Zulip (=_=) (Apr 04 2020 at 07:53):

T Murrills said:

tell me most mathematicians don’t think of a topological space as actually having an uncountable number of points

Nope. It's just a blob. Usually a 2-dim one.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:54):

image.png

view this post on Zulip T Murrills (Apr 04 2020 at 07:54):

It’s too late for me to tell if you’re trolling me or not lol

view this post on Zulip (=_=) (Apr 04 2020 at 07:54):

Absolutely trolling. But also true.

view this post on Zulip T Murrills (Apr 04 2020 at 07:54):

because like...you’re not wrong

view this post on Zulip sarahzrf (Apr 04 2020 at 07:54):

synthetic mathematics time

view this post on Zulip (=_=) (Apr 04 2020 at 07:55):

See Jules Hedges tweet here: https://twitter.com/_julesh_/status/1241481443886456832

It's so convenient that every topological space is 2-dimensional and vaguely blob-shaped

- julesh (@_julesh_)

view this post on Zulip (=_=) (Apr 04 2020 at 07:55):

So if it's just a blob, we can store it on Github

view this post on Zulip sarahzrf (Apr 04 2020 at 07:56):

booo

view this post on Zulip sarahzrf (Apr 04 2020 at 07:56):

dont put blobs in git

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:57):

well, you raise an interesting point: programmers expect any arbitrary object (of a certain type) to be storable as data, which basically means it's countable; physicists expect an object to actually exist in the universe, somewhere (possibly inside a black hole); and mathematicians almost always work with descriptions of objects (not encodings of objects), and a large part of the trick is relating different descriptions of the same object, which requires proofs of arbitrary complexity, not just computation

view this post on Zulip T Murrills (Apr 04 2020 at 07:57):

like......the thing actually thought of does not have uncountably many points...but it is thought to be something which does? ya feel? waits for sarah to say ‘intension’

view this post on Zulip (=_=) (Apr 04 2020 at 07:58):

sarahzrf said:

image.png

Point taken, but intuitionism leads to constructivism, which leads to programming, which is, as we've discussed above, not exactly mathematics.

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:58):

it’s the classic argument of whether 19374829^163947^16373^16372 actually exists. Mathematicians say yes, I just wrote down a description of that number, and programmers say no, you can’t represent it in binary.

view this post on Zulip sarahzrf (Apr 04 2020 at 07:59):

mathematicians dont all say yes

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:59):

Classical mathematicians

view this post on Zulip sarahzrf (Apr 04 2020 at 07:59):

some of them, like edward nelson, have integrity

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:59):

not finitists, sorry forgot to clarify

view this post on Zulip Verity Scheel (Apr 04 2020 at 07:59):

not talking about Egyptians, either

view this post on Zulip (=_=) (Apr 04 2020 at 07:59):

Nicholas Scheel said:

it’s the classic argument of whether 19374829^163947^16373^16372 actually exists. Mathematicians say yes, I just wrote down a description of that number, and programmers say no, you can’t represent it in binary.

You wrote down a program. If we execute that program with enough resources, it'll give you that number.
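
The "enough resources" is worth quantifying. Staying lazy, i.e. computing with the description rather than the digits, we can still extract facts about the number (a sketch, reading the tower right-associatively):

```python
import math

# N = 19374829 ** (163947 ** (16373 ** 16372))  -- never evaluate this!
# Its binary representation has vastly more digits than there are atoms
# in the observable universe. But iterated logs of N are perfectly computable:
#
#   log2 N        = (163947 ** (16373 ** 16372)) * log2(19374829)  -- still astronomical
#   log2(log2 N) ~= (16373 ** 16372) * log2(163947)                -- still astronomical
#   log2^3 N     ~= 16372 * log2(16373) + log2(log2(163947))       -- an ordinary float

lll = 16372 * math.log2(16373) + math.log2(math.log2(163947))
print(lll)  # roughly 2.3e5, so N has about 2**(2**lll) binary digits
```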

view this post on Zulip sarahzrf (Apr 04 2020 at 07:59):

edward nelson was anti-finitism

view this post on Zulip sarahzrf (Apr 04 2020 at 08:00):

how much resource do you need for that, rongmin

view this post on Zulip sarahzrf (Apr 04 2020 at 08:00):

:)

view this post on Zulip T Murrills (Apr 04 2020 at 08:00):

my thoughts lean towards “people who say numbers represented in binary actually exist” are also wrong in a very important sense

view this post on Zulip (=_=) (Apr 04 2020 at 08:00):

sarahzrf said:

how much resource do you need for that, rongmin

A lot. Mathematicians just use lazy evaluation extensively.

view this post on Zulip T Murrills (Apr 04 2020 at 08:01):

I N T E N S I O N

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:01):

inb4 “Haskell programs have no side effects bc nobody ever runs them”

view this post on Zulip (=_=) (Apr 04 2020 at 08:01):

I N T E N S I O N I N T E N S I F Y I N G

view this post on Zulip (=_=) (Apr 04 2020 at 08:01):

Nicholas Scheel said:

inb4 “Haskell programs have no side effects bc nobody ever runs them”

Monads

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:01):

https://xkcd.com/1312/

view this post on Zulip sarahzrf (Apr 04 2020 at 08:02):

yall have very little of my time left before i insist that you get some experience with a type-theory-based proof assistant before discussing the relationship between mathematics and programming with me

view this post on Zulip (=_=) (Apr 04 2020 at 08:03):

i've been there. i don't think it informs this debate very much.

view this post on Zulip (=_=) (Apr 04 2020 at 08:04):

mostly because it's not about "mathematics and programming" for me, it's "the cultures of doing mathematics and programming".

view this post on Zulip (=_=) (Apr 04 2020 at 08:05):

which is why i said size issues are not relevant here.

view this post on Zulip T Murrills (Apr 04 2020 at 08:05):

I have that experience but I think it mumbles propagates a map/territory conflation or something

that said I do think it informs more philosophical debates on the differences between “math” and “programming”

view this post on Zulip sarahzrf (Apr 04 2020 at 08:05):

hmm, im mostly whining about some of what im seeing about what is possible to compute with

view this post on Zulip (=_=) (Apr 04 2020 at 08:06):

T Murrills said:

I have that experience but I think it mumbles propagates a map/territory conflation or something

that said I do think it informs more philosophical debates on the differences between “math” and “programming”

Most people conflate the map with the territory because they haven't been there or haven't thought much about the territory.

view this post on Zulip sarahzrf (Apr 04 2020 at 08:06):

Nicholas Scheel said:

well, you raise an interesting point: programmers expect any arbitrary object (of a certain type) to be storable as data, which basically means it's countable; physicists expect an object to actually exist in the universe, somewhere (possibly inside a black hole); and mathematicians almost always work with descriptions of objects (not encodings of objects), and a large part of the trick is relating different descriptions of the same object, which requires proofs of arbitrary complexity, not just computation

im looking at u

view this post on Zulip sarahzrf (Apr 04 2020 at 08:07):

the map is not the codomain!!!

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:07):

objects != functions, from the programmer’s perspective

view this post on Zulip T Murrills (Apr 04 2020 at 08:07):

I mean, you can compute about anything, you just can’t necessarily compute anything in the sense of what a mathematician might assume a thing is, sorta, and I think that distinction...is here (idk, [end of sentence])

view this post on Zulip (=_=) (Apr 04 2020 at 08:09):

Nicholas Scheel said:

objects != functions, from the programmer’s perspective

How do you store functions then? Not as "data"?

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:09):

actually, I might want to take that back: Turing machines are countable, right?

view this post on Zulip sarahzrf (Apr 04 2020 at 08:09):

turing machines are not sets

view this post on Zulip sarahzrf (Apr 04 2020 at 08:09):

unless you're using set-theoretic foundations like an animal

view this post on Zulip T Murrills (Apr 04 2020 at 08:10):

the set of Turing machines I assume @Nicholas Scheel means

view this post on Zulip sarahzrf (Apr 04 2020 at 08:10):

on a finite alphabet? yeah i think so

view this post on Zulip sarahzrf (Apr 04 2020 at 08:10):

but a turing machine is an intension, not an extension
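
The intension/extension split is easy to exhibit in any language where code is data; a small Python illustration (the two code strings are made up):

```python
# Two different intensions (pieces of code) for the same extension (function):
code_f = "lambda n: n + n"
code_g = "lambda n: 2 * n"

f, g = eval(code_f), eval(code_g)

assert code_f != code_g                        # the codes are distinct, comparable data
assert all(f(n) == g(n) for n in range(1000))  # the behaviours agree (on a sample)

# Comparing codes is a trivial string test; deciding that the extensions
# agree on *all* of nat is undecidable in general.
```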

view this post on Zulip T Murrills (Apr 04 2020 at 08:10):

hell yeah there it is!!!!

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:11):

rest assured, I do not mean to ask questions that don’t typecheck, I just am generous with implicit verbiage

view this post on Zulip sarahzrf (Apr 04 2020 at 08:11):

there is no computable surjection (nat -> bool) -> nat, but there's certainly a computable surjection □(nat -> bool) -> nat

view this post on Zulip T Murrills (Apr 04 2020 at 08:12):

what’s box here? I’m used to seeing it as a modal operator

view this post on Zulip sarahzrf (Apr 04 2020 at 08:12):

where □ is the "intension"/"some code for" modality

view this post on Zulip T Murrills (Apr 04 2020 at 08:12):

ah nice!

view this post on Zulip sarahzrf (Apr 04 2020 at 08:12):

check this out T https://arxiv.org/abs/1703.01288

view this post on Zulip sarahzrf (Apr 04 2020 at 08:13):

this kind of interpretation of □ is a curry-howard transpose of provability logic

view this post on Zulip T Murrills (Apr 04 2020 at 08:13):

@sarahzrf omgomgomg. this is great.

view this post on Zulip sarahzrf (Apr 04 2020 at 08:13):

yeah :)

view this post on Zulip sarahzrf (Apr 04 2020 at 08:13):

i noticed just earlier tonight that the author is on this server actually

view this post on Zulip sarahzrf (Apr 04 2020 at 08:14):

(altho i dont think that paper originates that interpretation of □? im not sure yeah it calls it an "old piece of programming lore" in the intro)

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:16):

I think it’s related, but I never quite understood the technical details of Andrej’s work here: is he assuming that the functions are computable? http://math.andrej.com/2014/05/08/seemingly-impossible-proofs/

view this post on Zulip sarahzrf (Apr 04 2020 at 08:16):

nope!

view this post on Zulip sarahzrf (Apr 04 2020 at 08:16):

conats are omniscient in pretty bare constructive foundations
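
Escardó's "seemingly impossible" search can be sketched in a few lines; a rough Python port of the idea in the linked posts (assuming the predicate only inspects finitely many bits of its argument, i.e. is continuous):

```python
def cons(b, a):
    # prepend bit b to the sequence a; a "sequence" is a function index -> bit
    return lambda i: b if i == 0 else a(i - 1)

def find(p):
    # Return a binary sequence satisfying p, if any does. The search is
    # delayed inside a memoized thunk, so it only runs as far as the bits
    # that actually get inspected.
    result = []
    def force():
        if not result:
            left = find(lambda b: p(cons(0, b)))
            if p(cons(0, left)):
                result.append(cons(0, left))
            else:
                result.append(cons(1, find(lambda b: p(cons(1, b)))))
        return result[0]
    return lambda i: force()(i)

def forsome(p):
    # "omniscience": decidable existential quantification over Cantor space
    return p(find(p))

a = find(lambda s: s(1) == 1 and s(3) == 1)
assert a(1) == 1 and a(3) == 1
assert forsome(lambda s: s(5) == 1)
assert not forsome(lambda s: s(0) == 2)  # bits are only ever 0 or 1
```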

view this post on Zulip T Murrills (Apr 04 2020 at 08:17):

I also think the intensional/extensional dichotomy is but a precursor to some deeper philosophical understanding of relative intension/extension but

view this post on Zulip sarahzrf (Apr 04 2020 at 08:18):

oo https://www.cs.bham.ac.uk/~mhe/papers/negative-axioms.pdf

view this post on Zulip T Murrills (Apr 04 2020 at 08:20):

aw yeah :) I will read this but quick question: I never got why MLTT is called “intensional”. what is the connection to intension, exactly?

view this post on Zulip sarahzrf (Apr 04 2020 at 08:20):

it's in reference to the equality

view this post on Zulip sarahzrf (Apr 04 2020 at 08:20):

the equality in standard mltt is an intensional one

view this post on Zulip sarahzrf (Apr 04 2020 at 08:21):

you can only prove that things are equal if they are intensionally equal

view this post on Zulip sarahzrf (Apr 04 2020 at 08:21):

well, i say "things" when i mean "functions" mostly

view this post on Zulip T Murrills (Apr 04 2020 at 08:21):

and by intensionally equal we mean...equal by definitional equality?

view this post on Zulip sarahzrf (Apr 04 2020 at 08:21):

rrrrroughly

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:22):

but here he’s assuming that functions are computable/continuous? http://math.andrej.com/2007/09/28/seemingly-impossible-functional-programs/

view this post on Zulip sarahzrf (Apr 04 2020 at 08:22):

yes, there he is

view this post on Zulip sarahzrf (Apr 04 2020 at 08:22):

gets more confusing when you have free variables in scope

view this post on Zulip T Murrills (Apr 04 2020 at 08:23):

Like, the prop equality holds only if a definitional equality holds? by definitional equality I mean something you can derive as a definitional equality judgment in the. you know. the inference system.

view this post on Zulip sarahzrf (Apr 04 2020 at 08:23):

but yeah we have that (there exists a term $e$ such that $\vdash e : a = b$) iff ($\vdash a \equiv b$)

view this post on Zulip T Murrills (Apr 04 2020 at 08:23):

ah there it is cool cool

view this post on Zulip sarahzrf (Apr 04 2020 at 08:24):

if you put a Γ in front of the ⊢ then im not prepared to stake anything on it

view this post on Zulip sarahzrf (Apr 04 2020 at 08:24):

anyway this is actually kind of a problem

view this post on Zulip sarahzrf (Apr 04 2020 at 08:24):

if you want your defeq to be decidable, it implies that your propeq for closed terms is decidable

view this post on Zulip sarahzrf (Apr 04 2020 at 08:25):

which means that funext cannot be in your system, since propeq (extensional equality of functions nat -> nat, say) is not decidable

view this post on Zulip sarahzrf (Apr 04 2020 at 08:25):

unless you wanna throw out canonicity or subject reduction or something, in order to break the equivalence i wrote

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:25):

honestly, if you don’t have funext, you’re not doing math

view this post on Zulip T Murrills (Apr 04 2020 at 08:26):

okay, that feels like...an ad-hoc port of the concept of intensionality into Math if you know what I mean

Ah I’m gathering function extensionality introduces undecidability? think I’ve heard that, but not sure why

view this post on Zulip sarahzrf (Apr 04 2020 at 08:26):

guess im not doing math then

view this post on Zulip sarahzrf (Apr 04 2020 at 08:27):

see here https://github.com/coq/coq/wiki/extensional_equality

view this post on Zulip sarahzrf (Apr 04 2020 at 08:27):

thank u @Conor McBride ^__^

view this post on Zulip sarahzrf (Apr 04 2020 at 08:28):

anyway i should really fucking be going to sleep, it's 4:30am

view this post on Zulip sarahzrf (Apr 04 2020 at 08:28):

:zzz:

view this post on Zulip T Murrills (Apr 04 2020 at 08:29):

In general, this convertibility check must work for any two extensionally equal functions and that is undecidable in general.

this is what I don’t get but it’s probably some diagonal (lawvere) thing I’m guessing

view this post on Zulip sarahzrf (Apr 04 2020 at 08:29):

no i just mean

view this post on Zulip sarahzrf (Apr 04 2020 at 08:29):

go try writing an Eq instance for Integer -> Integer in haskell

view this post on Zulip sarahzrf (Apr 04 2020 at 08:30):

see how it goes

view this post on Zulip T Murrills (Apr 04 2020 at 08:30):

is it just

view this post on Zulip sarahzrf (Apr 04 2020 at 08:30):

if you wanna be really concrete

view this post on Zulip T Murrills (Apr 04 2020 at 08:30):

you gotta check all the integers?

view this post on Zulip sarahzrf (Apr 04 2020 at 08:30):

set one of them to constant 0 and the other to 0 as long as your favorite turing machine has not halted

view this post on Zulip T Murrills (Apr 04 2020 at 08:30):

oh.

view this post on Zulip T Murrills (Apr 04 2020 at 08:31):

yeah. that’ll do it, lol
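
sarahzrf's counterexample, spelled out as a hedged Python sketch (`halted_within` stands in for stepping your favourite Turing machine; here it's a machine that never halts):

```python
def halted_within(n):
    # Stand-in: True iff some fixed machine M halts within n steps.
    # For illustration, M never halts.
    return False

f = lambda n: 0
g = lambda n: 1 if halted_within(n) else 0

# f and g are extensionally equal iff M never halts. A terminating equality
# test can only sample finitely many inputs:
assert all(f(n) == g(n) for n in range(10_000))  # looks equal so far...
# ...but no finite sample settles the question, so a total Eq instance for
# Integer -> Integer would decide the halting problem.
```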

view this post on Zulip T Murrills (Apr 04 2020 at 08:31):

ok well I should sleep too

view this post on Zulip T Murrills (Apr 04 2020 at 08:31):

It is also 4:30

view this post on Zulip T Murrills (Apr 04 2020 at 08:31):

(here)

view this post on Zulip sarahzrf (Apr 04 2020 at 08:31):

EST represent

view this post on Zulip sarahzrf (Apr 04 2020 at 08:31):

:zzz:

view this post on Zulip T Murrills (Apr 04 2020 at 08:31):

:zzz:

view this post on Zulip Verity Scheel (Apr 04 2020 at 08:32):

I abandoned EST two weeks ago ... has my sleep schedule changed? nope!

view this post on Zulip Amar Hadzihasanovic (Apr 04 2020 at 09:43):

Rongmin Lu said:

Rongmin Lu said:

This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

As I quoted above.

In that quote Voevodsky speaks of a different 1993 paper, in which he himself found a mistake a few years later. @John Baez was talking about the Kapranov-Voevodsky 1991 paper that was disproved by Simpson. From what I know, it is true that the latter had not received much attention.

view this post on Zulip Amar Hadzihasanovic (Apr 04 2020 at 09:44):

But it is true that Voevodsky seems to have thought that the 1991 paper was correct for many years (even doubting the Simpson counterexample for a while)...

view this post on Zulip Amar Hadzihasanovic (Apr 04 2020 at 09:50):

The 1991 paper is quite an “extraordinary” example in that it really contains no proper proofs, only sketches of arguments. So I agree with John that it would have been too much to ask mathematicians who were already doubtful of the result to first reconstruct the supposed proof, and then find a mistake. :)

view this post on Zulip Joachim Kock (Apr 04 2020 at 10:10):

EDIT: while I was writing this lecture, Amar made some of the same points, in short and suitable ways :-)

It should be noted that Voevodsky talks about two different mistakes, of different mathematical nature, and occurring in very different sociological contexts.

One is a mistake made in a technical lemma in the field of motivic cohomology, a field involving top mathematicians at top universities, Voevodsky himself being a top expert who later got the Fields Medal for his work -- everything top. This is the case mentioned where people held seminars all over the world, but did not detect the mistake, accepting the technical result without checking the proof, trusting Voevodsky's authority, and using it in subsequent work. Voevodsky himself found the gap and provided a correct proof. In the end it was just a technical mistake, but potentially affecting many people's work.

The other mistake is his result with Kapranov about strict \infty-groupoids realising all homotopy types. That's a very different context: it was around 1990, years before people like Baez-Dolan, Street, Batanin, Tamsamani and Simpson really got higher category theory started. It seems that not so many people took notice at the time, and it was published in Cahiers, a very good journal, but probably considered 'obscure' by most people outside category theory. Those who did take notice did maybe not fully believe the result: it was already known that the result could not be true with standard geometric realisation. The subtle point was that the KV argument used a non-standard geometric realisation. I suppose the reaction at the time was: 'maybe it is true, maybe not, but it is not about the standard geometric realisation we care about'.

Simpson's preprint contradicting Kapranov-Voevodsky came in 1998. He was not able to debug the KV proof and pinpoint a faulty deduction, which is difficult because the paper is very sketchy. But what he did was much better (unfortunately not mentioned in the various Voevodsky interviews): he found the reason it could not work, namely a problem with strict units, hence showing that the mistake is a conceptual one, not just a technical one. Simpson also conjectured that the mistake could be fixed if working with weak units instead of strict units.

Voevodsky said he was not convinced about the mistake until 15 years after Simpson's paper. One reason is of course: could Simpson's paper be trusted? In fact Simpson at that time was already experiencing all the trouble Voevodsky describes: that his papers were too difficult to check fully. (In fact, one of Simpson's main papers on higher categories [A closed model structure for n-categories, internal Hom n-stacks and generalized Seifert-Van Kampen (1997)] had a gap, that it took a 300-page PhD thesis to fill [Péllissier, 2002].) Simpson worked ten years on formalising proofs in Coq (long before Voevodsky), hoping that he would one day be able to certify his own papers. (He came as far as giving a fully certified proof of the Gabriel-Zisman localisation theorem in simplicial homotopy theory, and suggested that this was probably the first proof ever! -- Gabriel and Zisman themselves admit that the details are too unwieldy to write down.)

Coming back to the KV paper: I worked several years on Simpson's conjecture, and tried hard to understand the KV ideas, but ended up approaching the conjecture with other methods, and eventually drifted to other problems. I did not get deep enough into their proof to understand what went wrong. But now, over the past few years, Simon Henry has made important inroads to the theory, and proved a version of the conjecture. His approach follows the KV idea, but uses a lot of modern machinery. I don't know if he can pinpoint the precise mistake in KV, but in any case Henry's actual results are of course more important than the historical curiosity of what went wrong in the KV proof.

Oops, that's probably over the page limit :-( And sorry for the full stops. I hope it does not sound too much like a lecture.

view this post on Zulip Morgan Rogers (he/him) (Apr 04 2020 at 10:20):

sarahzrf said:

maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?

I think the style/type of maths that's taught before university, at least in the UK, trains people to be good programmers in this sense; the young people who are "good at maths" (in terms of exam success) are typically those that manage to internalise algorithms and understand how to solve the maths problems they're presented with rather than to understand what it is they're actually doing. Perhaps that's a cynical take on the situation, though.

view this post on Zulip Morgan Rogers (he/him) (Apr 04 2020 at 10:23):

sarahzrf said:

actually fuck youre right i cant compile my proofs :cry:
image.png

"Error: Pattern(omniscience(negd o P)) was not completely instantiated..." you almost got there, @sarahzrf , you almost made all academic pursuit irrelevant :rolling_on_the_floor_laughing:

view this post on Zulip Morgan Rogers (he/him) (Apr 04 2020 at 10:46):

Nicholas Scheel said:

it’s the classic argument of whether 19374829^163947^16373^16372 actually exists.

Funnily enough I was just wondering exactly that myself. What are the chances? :rolling_on_the_floor_laughing:

view this post on Zulip Fabrizio Genovese (Apr 04 2020 at 11:11):

sarahzrf said:

everything can be meta-theoretically countable

Actually everything infinite has a countable model, even better! :slight_smile:

view this post on Zulip (=_=) (Apr 04 2020 at 14:26):

Amar Hadzihasanovic said:

This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake. And it clearly was not an accident. A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

In that quote Voevodsky speaks of a different 1993 paper, in which he himself found a mistake a few years later. John Baez was talking about the Kapranov-Voevodsky 1991 paper that was disproved by Simpson. From what I know, it is true that the latter had not received much attention.

I hate chatrooms... :sweat_smile:

Here's what I wrote:

And the point I was trying to make is what drove Voevodsky to develop HoTT. He was terrified to find out that there was a flaw in his work and nobody had discovered the error, even though plenty of people had apparently read it.

Source: https://www.ias.edu/ideas/2014/voevodsky-origins

I know it was late at night his local time --- I was not expecting him to engage --- but I cited the very article that contained the quote about the 1993 paper. The article made it clear there were at least two papers where Voevodsky goofed. That was why I was so baffled when I responded like that.

Well, it is a nice little case study of how academia really works. Also, chatrooms are the worst (and best, arrrrghh...)

view this post on Zulip (=_=) (Apr 04 2020 at 14:29):

Amar Hadzihasanovic said:

But it is true that Voevodsky seems to have thought that the 1991 paper was correct for many years (even doubting the Simpson counterexample for a while)...

More reason in favour of the formalisation of mathematics. Again (because I've said this so many times elsewhere), not asking mathematicians to write code, just wishing that the tech will catch up to make the user experience more palatable.

view this post on Zulip (=_=) (Apr 04 2020 at 15:27):

Joachim Kock said:

EDIT: while I was writing this lecture, Amar made some of the same points, in short and suitable ways :-)

Yes, and I realised, as Amar had pointed out, that John had thought that I was talking about the 1991 paper, when I had meant the 1993 paper.

view this post on Zulip John Baez (Apr 04 2020 at 19:16):

Amar Hadzihasanovic said:

The 1991 paper is quite an “extraordinary” example in that it really contains no proper proofs, only sketches of arguments. So I agree with John that it would have been too much to ask mathematicians who were already doubtful of the result to first reconstruct the supposed proof, and then find a mistake. :)

Yeah. And again, Carlos Simpson did write a paper in 1998 showing what everyone believed, namely that the result of the 1991 paper was false.

So, I think math was working just fine in this episode, albeit a bit slowly: if someone writes a sketchy paper with a wrong-looking result in a fairly esoteric area, other people aren't going to drop what they're doing and try to make the arguments precise in order to locate a flaw: most of them will just ignore this paper.

It's a bit like if a programmer released some hard-to-read code that claims to do something that's pretty obviously impossible: will other programmers spend time looking for a bug?

But okay, enough about that. What was the 1993 paper that Voevodsky was talking about, in which he himself found a mistake? Oh, I see @Joachim Kock explained it:

One is a mistake made in a technical lemma in the field of motivic cohomology, a field involving top mathematicians at top universities, Voevodsky himself being a top expert who later got the Fields Medal for his work -- everything top. This is the case mentioned where people held seminars all over the world, but did not detect the mistake, accepting the technical result without checking the proof, trusting Voevodsky's authority, and using it in subsequent work. Voevodsky himself found the gap and provided a correct proof. In the end it was just a technical mistake, but potentially affecting many people's work.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 19:46):

While writing large amounts of complex code I inevitably find bugs. This makes me feel that mathematicians are overly optimistic regarding the perfection of their work.

view this post on Zulip John Baez (Apr 04 2020 at 19:52):

I always feel computer scientists overestimate how much mathematicians think their work is perfect, or even care. It's not like anyone is gonna die if the proof of some esoteric conjecture later on turns out to have a flaw. If the result is important enough, people will keep looking at it and probably eventually find the mistake.

In really important subjects, like politics or economics, everything is and always has been a complete shitshow. It's sort of weird that people get freaked out by mistakes in proofs.

view this post on Zulip Nathaniel Virgo (Apr 04 2020 at 19:56):

Naïve set theory had a pretty massive bug, but a small team of hackers worked intensively on the issue and were able to roll out a patch.

view this post on Zulip Daniel Geisler (Apr 04 2020 at 19:59):

More mathematical work goes into simplifying older results than into developing new ones. Consider how current proofs of Fermat's Last Theorem are much shorter than the original proof.

view this post on Zulip sarahzrf (Apr 04 2020 at 20:19):

Morgan Rogers said:

sarahzrf said:

maybe i mean like... a lot of the thinking you do in programming is the same kind of thinking you do in math, the thing that is alien to many programmers is the way of seeing what it is that you're doing, maybe?

I think the style/type of maths that's taught before university, at least in the UK, trains people to be good programmers in this sense; the young people who are "good at maths" (in terms of exam success) are typically those that manage to internalise algorithms and understand how to solve the maths problems they're presented with rather than to understand what it is they're actually doing. Perhaps that's a cynical take on the situation, though.

this is the opposite of what i mean

view this post on Zulip sarahzrf (Apr 04 2020 at 20:21):

memorizing algorithms does not make you any better at programming than it does at math

view this post on Zulip sarahzrf (Apr 04 2020 at 20:22):

in programming more than in math, you can't get away with not understanding what it is that you're doing, because you can't fudge the details as much when you don't get what they mean

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:24):

John Baez said:

I always feel computer scientists overestimate how much mathematicians think their work is perfect, or even care. It's not like anyone is gonna die if the proof of some esoteric conjecture later on turns out to have a flaw. If the result is important enough, people will keep looking at it and probably eventually find the mistake.

In really important subjects, like politics or economics, everything is and always has been a complete shitshow. It's sort of weird that people get freaked out by mistakes in proofs.

There was a good question on MO about how serious would it be to find an error in math. The top answer was: 'meh'. Because it's really hard for a mathematical subject to prove a conceptually wrong result and go on for a long time without completely falling apart. Usually errors are formal and not conceptual.
Moreover, unlike programming, math floats on a sea of informal abstraction which is flexible enough to avoid many of the complications of coding (e.g. there's no buffer overflow in math).

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:24):

That said, I'm a big fan of computer formalization also for the purpose of error checking. But I don't live in fear.

view this post on Zulip John Baez (Apr 04 2020 at 20:26):

(e.g. there's no buffer overflow in math).

There is, but then I take a nap. :upside_down:

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:28):

John Baez said:

(e.g. there's no buffer overflow in math).

There is, but then I take a nap. :upside_down:

Mmmh I think I don't know English well enough to get this pun :sweat_smile:

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:28):

Or math

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:28):

Or you

view this post on Zulip John Baez (Apr 04 2020 at 20:30):

It wasn't a pun. I just mean that when I'm doing math and my brain gets overloaded and things stop making sense, I take a nap... and when I wake up, things are usually better.

view this post on Zulip Matteo Capucci (he/him) (Apr 04 2020 at 20:31):

Oh well, that's the mathematician though! :laughing:

view this post on Zulip John Baez (Apr 04 2020 at 20:31):

Right, but math is just mathematicians doing stuff. Not to get all philosophical....

view this post on Zulip Daniel Geisler (Apr 05 2020 at 01:49):

As a teen I read about Poincaré's remark about mathematicians being unable to view the source of their mathematical inspiration and thoughts. I learned yoga fifty years ago to learn how to view my mind's activities. This allows me to be inspired and to keep working over the decades, when most people last only a few years in tetration research. I see meta-cognition, generalization, and abstraction as vital to mathematical growth. I'm a mathematical hedonist: I do math because it feels wonderful.

view this post on Zulip (=_=) (Apr 05 2020 at 08:11):

Matteo Capucci said:

That said, I'm a big fan of computer formalization also for the purpose of error checking. But I don't live in fear.

That's great. But Voevodsky did, and that prompted him to eventually develop HoTT. Fear can be a useful thing.

The thing that I find unsatisfying about John's opinion here is that it was exactly the same opinion people used to have about code: "There's plenty of code that's not gonna kill anyone if there's a bug, so why are people so obsessed with debuggers anyway?"

I don't think that's productive. It's because people were obsessed with writing better tools for programmers to write better programs that programming has become a lot easier than before. And as it turns out, in our highly networked cloud computing age, a bug can mean a security vulnerability, or the difference between profit and loss.

view this post on Zulip (=_=) (Apr 05 2020 at 08:14):

John Baez said:

It's sort of weird that people get freaked out by mistakes in proofs.

Yeah, it's really weird how some people are freaking out about the allegations of mistakes in this proof.

view this post on Zulip Daniel Geisler (Apr 05 2020 at 08:17):

I've been waiting thirty years to get my research reviewed. Last chance for a human to review my work because within five to ten years computers will be reviewing people's mathematics.

view this post on Zulip (=_=) (Apr 05 2020 at 08:19):

Daniel Geisler said:

I've been waiting thirty years to get my research reviewed. Last chance for a human to review my work because within five to ten years computers will be reviewing people's mathematics.

Five to ten years would be awesome. I'm not too optimistic about that, since there are a lot of issues, both technological and organisational, to be addressed before that can happen.

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 10:18):

sarahzrf said:

memorizing algorithms does not make you any better at programming than it does at math

I deliberately said "internalise" rather than "memorise", although to be fair the latter would have been almost as applicable in my high school maths example. I mean we get trained to solve a specific form of problem very efficiently, without any of the creativity being directed into reformulating or expanding the question, which is the form of constraint that my software dev friends seem to deal with (albeit with a much larger solution space to work in), whereas half of the work that I do goes into working out which questions to ask. In any case I'm going to stop starting sentences with "in computer science"; I am not a computer scientist, and in spite of having such friends am not qualified to make such comments. :sweat_smile:

view this post on Zulip Mike Stay (Apr 05 2020 at 19:49):

There's something called "plumbing" in the work of programming that's figuring out how to fit different systems together. It's dealing with DLL hell, and what passes for build systems (I'm looking at you, maven and sbt), and translating between different message formats, and knowing about the difference in access times between a register and the cache and main memory and disk and network, and being able to adapt old build scripts to new systems, and all of that stuff that mathematicians (for the most part) have to idealize away in order to keep the complexity low enough that they can reason about it.

There are also mathematicians who want to make the programming languages they have to use as much like the math they enjoy as possible. Unfortunately, sometimes the compilers and the execution model aren't up to it. Scala written like Haskell performs terribly and tends to find the problems in the type system (I'm looking at you, internal classes).

So there are those programmers whose experience with math has been that it's simply too idealized to be useful in their job and, when used without a deep understanding of the abstractions layered on top of the hardware, often leads to very slow code.

view this post on Zulip sarahzrf (Apr 05 2020 at 23:36):

theres just, a lot of things

view this post on Zulip sarahzrf (Apr 05 2020 at 23:36):

some of them legitimate, some not

view this post on Zulip sarahzrf (Apr 05 2020 at 23:36):

and they all tend to get blended together ;-;

view this post on Zulip Morgan Rogers (he/him) (Apr 06 2020 at 10:12):

Probably also that we mathematicians can be downright insufferable :stuck_out_tongue:

view this post on Zulip Mike Shulman (Apr 06 2020 at 22:42):

@Rongmin Lu One crucial difference between mathematics and code is that code that is conceptually correct but contains technical errors can create big problems in the real world as soon as it is put into production; whereas mathematics that is conceptually correct but contains technical errors needs to be fixed eventually, but as long as it is eventually fixed no lasting damage is done (e.g. it doesn't create security holes through which malicious actors can hack into mathematics).

view this post on Zulip Mike Shulman (Apr 06 2020 at 22:42):

I wrote a fairly extensive MO answer about mistakes in mathematics a bit ago.

view this post on Zulip (=_=) (Apr 07 2020 at 04:57):

Mike Shulman said:

Rongmin Lu One crucial difference between mathematics and code is that code that is conceptually correct but contains technical errors can create big problems in the real world as soon as it is put into production; whereas mathematics that is conceptually correct but contains technical errors needs to be fixed eventually, but as long as it is eventually fixed no lasting damage is done (e.g. it doesn't create security holes through which malicious actors can hack into mathematics).

The problem is with "eventually". The long-running saga that started around the same time as the HoTT special year may have done lasting damage to pure mathematics, precisely because "eventually" took too long. I think there's been damage done because the most recent instalment of that saga led to people talking about "socially constructed mathematics" again. It's not the first time someone has brought up that argument, and it probably won't be the last. That IS damage.

view this post on Zulip (=_=) (Apr 07 2020 at 05:01):

People who write code also care about it even when it's not put into production: see the open source software movement. Github provides a reasonably nice model of how to build a system that provides timely feedback and allows others to fix technical errors in a visible manner. The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.

view this post on Zulip Mike Shulman (Apr 08 2020 at 19:32):

What "saga" are you referring to?

view this post on Zulip Mike Shulman (Apr 08 2020 at 19:34):

Rongmin Lu said:

People who write code also care about it even when it's not put into production: see the open source software movement.

Of course, by "in production" I meant simply "being used". Open source software certainly is that.

The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.

That's a bold claim. I doubt it. What evidence do you offer?

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (Apr 08 2020 at 19:44):

Mike Shulman said:

What "saga" are you referring to?

I assumed he meant the ABC conjecture https://www.math.columbia.edu/~woit/wordpress/?p=11709

view this post on Zulip sarahzrf (Apr 08 2020 at 20:03):

@Rongmin Lu
u might find this funny, but i was kinda assuming for a while that you weren't a programmer based on how positive your characterization of the programming world is :upside_down:

view this post on Zulip sarahzrf (Apr 08 2020 at 20:03):

like, an outsider's idealism

view this post on Zulip sarahzrf (Apr 08 2020 at 20:03):

in some ways at least

view this post on Zulip (=_=) (Apr 09 2020 at 03:16):

sarahzrf said:

u might find this funny, but i was kinda assuming for a while that you weren't a programmer based on how positive your characterization of the programming world is :upside_down:

I'm very aware of all the downsides of the programming world. However, as I said earlier, I was merely pointing out all the things that are available that can make a programmer's life much easier, things that mathematics currently does not have.

The fact that large swathes of the programming world may not make much use of these tools is a testament to Gibson's observation that "the future is already here, it's just not evenly distributed".

view this post on Zulip (=_=) (Apr 09 2020 at 03:22):

Mike Shulman said:

Of course, by "in production" I meant simply "being used". Open source software certainly is that.

By "in production", programmers usually mean code that's being deployed to be used actively in a business. There are plenty of Github repos that are not "being used" in that sense of being "in production". Some repos even put disclaimers discouraging you from using them because they're not production-ready yet.

view this post on Zulip Mike Shulman (Apr 09 2020 at 03:40):

@Rongmin Lu Well, not being a programmer, I apologize for using the jargon wrong. But I thought the meaning would be clear from the point I was making, since using code in the real world is obviously sufficient for bugs to cause real problems, whether or not it is "deployed" (whatever that means in your jargon) to be used actively in a "business" (however that word is defined).

view this post on Zulip (=_=) (Apr 09 2020 at 03:48):

Mike Shulman said:

Well, not being a programmer, I apologize for using the jargon wrong. But I thought the meaning would be clear from the point I was making, since using code in the real world is obviously sufficient for bugs to cause real problems, whether or not it is "deployed" (whatever that means in your jargon) to be used actively in a "business" (however that word is defined).

Sure, but my contention was that not all open source software is being used actively in the real world. There are lots of reasons why people publish open source software on Github, and one of them is to invite feedback because the code isn't ready to be used in the real world. Lots of code on Github is just research code or personal projects. It's still open source software.

view this post on Zulip (=_=) (Apr 09 2020 at 04:27):

Mike Shulman said:

Rongmin Lu said:

The problems that Voevodsky had described in the IAS article concerning his 1991 and 1993 papers could easily have been fixed if the community had had a Github equivalent to report and fix errors in mathematics.

That's a bold claim. I doubt it. What evidence do you offer?

Since you have experience using Github for the HoTT book, perhaps you could tell us how difficult it was to use Github to report and fix errors during the project?

Perhaps you can tell us how Github still creates the kind of problems that Voevodsky had identified in his IAS article:

My claim is a counterfactual. You have actual experience using Github for a real-life project. What evidence do you offer to counter my argument?