Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: community: general

Topic: P vs NP


view this post on Zulip Paul Blain Levy (Apr 05 2020 at 14:32):

Fabrizio Genovese said:

Yes, I was referring to the fact that nearly everyone believes P to be not equal to NP. The only notable exception I know of is Donald Knuth, who nonetheless believes that "P = NP, but an algorithm to reduce an NP-complete problem to a P one is too complicated for humans ever to find", which means, in practice, that for many things he still believes you can safely rely on what P != NP implies

view this post on Zulip Paul Blain Levy (Apr 05 2020 at 14:33):

@Fabrizio Genovese said:

Yes, I was referring to the fact that nearly everyone believes P to be not equal to NP.

They do? What is your evidence for this? I personally don't believe P =/= NP, and also don't believe P=NP. In my view, it's an open question.

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 14:45):

I saw a YouTube video recently on P vs NP that quotes this survey, in which a large majority of respondents believe P!=NP.

view this post on Zulip Fabrizio Genovese (Apr 05 2020 at 14:47):

Well, everyone in cryptography does: basically all modern-day crypto infrastructure relies on the assumption that some functions are easy to compute but hard to invert, such as modular exponentiation versus the discrete logarithm problem
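
The asymmetry Fabrizio is pointing at can be sketched in a few lines of Python. This is a toy model with tiny numbers, not real cryptography, and the function names are made up for illustration: the forward direction uses fast modular exponentiation, while the inverse shown here is brute-force search over the whole group.

```python
# Toy illustration (not real cryptography): computing g^x mod p is cheap,
# while recovering x from g^x mod p (the discrete log) is done here by
# exhaustive search, which scales exponentially in the bit-length of p.
p, g = 101, 2  # small prime modulus and a generator of (Z/101)*

def forward(x: int) -> int:
    """Easy direction: fast modular exponentiation (polynomial time)."""
    return pow(g, x, p)

def discrete_log(y: int) -> int:
    """Hard direction: naive search over all possible exponents."""
    for x in range(p - 1):
        if pow(g, x, p) == y:
            return x
    raise ValueError("y is not in the subgroup generated by g")

y = forward(37)
assert discrete_log(y) == 37  # round trip recovers the exponent
```

For a 2048-bit `p` the forward direction stays fast, while the naive search above would need on the order of 2^2048 trials, which is the asymmetry cryptography leans on.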

view this post on Zulip Paul Blain Levy (Apr 05 2020 at 14:51):

What matters for encryption is that no feasible solution to these problems will ever be found. And by "feasible" I mean in the ordinary woolly sense, not "polytime". A feasible non-polytime solution would damage encryption. An infeasible polytime solution would not. Nor would it damage encryption if a feasible solution exists but nobody ever finds it.

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 14:53):

Paul Blain Levy said:

Fabrizio Genovese said:

Yes, I was referring to the fact that nearly everyone believes P to be not equal to NP.

They do? What is your evidence for this? I personally don't believe P =/= NP, and also don't believe P=NP. In my view, it's an open question.

I would argue that this is a little different to the previous discussion about models of PA, in that P vs NP is a question about our eventual ability to do efficiently things that we already know we can do, and can be restated (thanks to the existence of NP-complete problems) in terms of our ability to algorithmically solve just a single, concrete, scalable problem. Thus it's a lot more reasonable to assume it's true or false, that "the answer is out there" before it is proved either way, or even if it never is.

view this post on Zulip Paul Blain Levy (Apr 05 2020 at 14:56):

Morgan Rogers said:

Paul Blain Levy said:

Fabrizio Genovese said:

Yes, I was referring to the fact that nearly everyone believes P to be not equal to NP.

They do? What is your evidence for this? I personally don't believe P =/= NP, and also don't believe P=NP. In my view, it's an open question.

I would argue that this is a little different to the previous discussion about models of PA, in that P vs NP is a question about our eventual ability to do efficiently things that we already know we can do, and can be restated (thanks to the existence of NP-complete problems) in terms of our ability to algorithmically solve just a single, concrete, scalable problem. Thus it's a lot more reasonable to assume it's true or false, that "the answer is out there" before it is proved either way, or even if it never is.

P=/=NP is an arithmetical statement, more precisely a Π^0_2 statement. As such, I strongly believe that it is bivalent (objectively true or false). If you thought otherwise, you may have misread my message.

view this post on Zulip Reid Barton (Apr 05 2020 at 14:56):

There are different degrees of belief, though, and it's completely reasonable to believe neither "P" nor "not P" if one means a sufficiently strong degree of belief.

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 15:01):

Ah, sorry, the epistemic logic system I was using to parse what you said was too restrictive, I was too quick to commute "not" and "believe". :joy:

view this post on Zulip Reid Barton (Apr 05 2020 at 15:05):

Is whether @Paul Blain Levy believes P=NP an arithmetic statement? :upside_down:

view this post on Zulip Paul Blain Levy (Apr 05 2020 at 15:07):

Morgan Rogers said:

I saw a youtube video recently on P vs NP that quotes this survey in which a large majority of respondents believe P!=NP.

I suspect that a person who believes P=/=NP, and/or is happy to identify "feasible" with "polytime", is more likely to choose to work in complexity theory than someone who lacks these views. So there may be a selection bias in this survey.

view this post on Zulip Fabrizio Genovese (Apr 05 2020 at 15:12):

Paul Blain Levy said:

What matters for encryption is that no feasible solution to these problems will ever be found. And by "feasible" I mean in the ordinary woolly sense, not "polytime". A feasible non-polytime solution would damage encryption. An infeasible polytime solution would not. Nor would it damage encryption if a feasible solution exists but nobody ever finds it.

A feasible non-polytime solution can be acceptable: if it depends exponentially on the size of the input, you just pick a bigger input. This is precisely what we do with, say, RSA, where every once in a while experts suggest picking keys of bigger size
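
Fabrizio's point can be made concrete with a back-of-envelope model. The cost function and key sizes below are invented for illustration (real attack costs on RSA follow the general number field sieve, not this toy formula); the point is only the shape of the argument: if the best attack is exponential in the key length, the defender answers a bigger attacker budget by ratcheting up the key size.

```python
# Toy cost model (hypothetical numbers, not real RSA security estimates):
# assume the best known attack on a k-bit key costs about 2^(k/10) ops.
def attack_cost(k_bits: int) -> int:
    """Assumed exponential attack cost for a k-bit key (illustrative)."""
    return 2 ** (k_bits // 10)

def smallest_safe_key(budget: int, sizes=(1024, 2048, 3072, 4096)) -> int:
    """First standard key size whose attack cost exceeds the attacker's budget."""
    for k in sizes:
        if attack_cost(k) > budget:
            return k
    raise ValueError("no listed key size resists this budget")

# As attacker budgets grow, the recommended key size ratchets upward.
assert smallest_safe_key(2 ** 100) == 1024
assert smallest_safe_key(2 ** 110) == 2048
```

The attack stays non-polytime throughout; the keys just get longer, which is why an exponential-time "solution" leaves encryption intact.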

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 15:13):

Paul Blain Levy said:

Morgan Rogers said:

I saw a youtube video recently on P vs NP that quotes this survey in which a large majority of respondents believe P!=NP.

I suspect that a person who believes P=/=NP, and/or is happy to identify "feasible" with "polytime", is more likely to choose to work in complexity theory than someone who lacks these views. So there may be a selection bias in this survey.

:shrug: an argument could be made that equating polytime with "eventually feasible" isn't such a stretch.

view this post on Zulip Reid Barton (Apr 05 2020 at 15:22):

https://eprint.iacr.org/2020/196.pdf is a cryptography paper which suggests treating a problem with known polytime algorithms (namely counting the number of points on the Jacobian of a hyperelliptic curve of genus 3) as practically infeasible. I think the known algorithms are somewhere in the range O(n^18) to O(n^24), where n = log p.

view this post on Zulip Reid Barton (Apr 05 2020 at 15:23):

Needless to say, there are mixed opinions about this

view this post on Zulip Morgan Rogers (he/him) (Apr 05 2020 at 15:36):

It may be set to stall for a while in the near future, but the exponential growth of computing power is pretty well documented. Sure, "eventually" might be very far in the future, and for some algorithms, like the one in the paper you cite, it might never arrive, or might require some major optimization to avoid needing all of the atoms in the universe to perform the computation. But the point is that whenever feasibility is reached for a given polytime algorithm, there are major consequences, and you can't know before a proof exists how far from "eventually" we might be for a given problem.

view this post on Zulip Reid Barton (Apr 05 2020 at 15:39):

I think the intended application for the paper I linked to is one where future-proofness is not essential (it does the attacker no good to be able to solve the problem 20 years from now, or even 20 days) and then it's much easier to tune the security parameter to make, say, O(n^2) feasible but O(n^18) infeasible.
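
The gap Reid describes is easy to check numerically. The machine speed below is an assumption picked for illustration; with it, one can see how a single choice of the parameter n makes the honest party's quadratic work instantaneous while the attacker's degree-18 work is astronomical.

```python
# Back-of-envelope check: tune n so that ~n^2 work is trivial while
# ~n^18 work is out of reach.  Machine speed is an assumed constant.
OPS_PER_SECOND = 10 ** 9  # assumed: a billion operations per second

def seconds(ops: int) -> float:
    return ops / OPS_PER_SECOND

n = 1000
honest = seconds(n ** 2)      # 10^6 ops  -> about a millisecond
attacker = seconds(n ** 18)   # 10^54 ops -> about 10^45 seconds

assert honest < 1             # honest work finishes in under a second
assert attacker > 10 ** 40    # attacker work dwarfs the age of the
                              # universe (roughly 4 * 10^17 seconds)
```

With a genuinely future-proof scheme one would have to worry about hardware growth eroding that margin, but as noted above, for delay-function-style applications the margin only has to hold now.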

view this post on Zulip Reid Barton (Apr 05 2020 at 15:40):

Haha, these exponents are so big I keep forgetting I need to put { } around them.

view this post on Zulip Reid Barton (Apr 05 2020 at 15:43):

Okay, reading the first sentence of the paper, what I wrote above is true for some applications (delay functions) but not for others (accumulators).

view this post on Zulip Fabrizio Genovese (Apr 05 2020 at 16:18):

VDFs (verifiable delay functions) are really _really_ cool

view this post on Zulip sarahzrf (Apr 06 2020 at 03:56):

Morgan Rogers said:

Ah, sorry, the epistemic logic system I was using to parse what you said was too restrictive, I was too quick to commute "not" and "believe". :joy:

obviously @Paul Blain Levy's beliefs form an ultrafilter :thinking:

view this post on Zulip dusko (Apr 20 2020 at 01:06):

this thread seems to be dead [bad joke removed].

but while the P vs NP problem might be a dead end by the usual low-level-programming methods of complexity theory, i think there are a couple of things that might be worth young category theorists' attention :)

1) the P vs NP question is a popular vista point on the tip of the iceberg of the question of what can be feasibly computed, and what can be feasibly hidden. there are a couple of layers on the iceberg that have been spelled out, but ground to a halt in the low-level mess.

suppose that all problems in P are easy. P=NP then means that there are no hard problems: you give me a source, say a ciphertext, and i can compute any information that it contains. (such was shannon's world: if there is a statistical correlation, the cryptanalyst will find a lever to move the information.)

however, it may happen that P=/=NP but AP=ANP. that means that some problems have hard instances, but they are easy on average: the hard instances are infeasible to find. (this is what worried leonid levin when he proceeded, after his very elegant cook-levin theorem, to develop the very messy average-case complexity theory.)

and then there is impagliazzo's thesis. suppose that we have a problem that is hard on average, and thus ANP=/=AP. the chance that we can guess hard instances of a problem is not always correlated with the chance that we can guess solved instances. so it may happen that there are plenty of hard problems, but that it is infeasible to construct one-way functions. (levin constructed his universal one-way function trying to derive it from a vanilla ANP problem, but failed.)

and to split the last hair, suppose there are plenty of hard problems with solutions, which allows you to construct one-way functions. to make it into a trapdoor function, you need to provide a succinct encoding of the solution. aka the KEY of the cipher. is it not possible that we can construct lots of one-way functions, but the size of their solutions for the hard direction grows intractably, and we still cannot build crypto systems with short keys?

well yes, we know at least one line of constructions that would make this possible. which brings us to (2).

but i don't want to bore you. this is already too long, and prolly too heavy. if anyone is interested, i can tell more l8r.

view this post on Zulip John Baez (Apr 20 2020 at 01:17):

This is reminiscent of Impagliazzo's "five worlds": Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania.

What do you think a "young category theorist" (or even an old and tired one) might contribute here?

view this post on Zulip dusko (Apr 20 2020 at 01:54):

John Baez said:

This is reminiscent of Impagliazzo's "five worlds": Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania.

What do you think a "young category theorist" (or even an old and tired one) might contribute here?

yes, it is impagliazzo's universes. the task was to prove that they are different. impagliazzo worked on distinguishing two of them, levin on another two, wigderson on another two. impagliazzo told the story in terms of universes. it's an unusually clear vision. all of the tasks are still open, certainly not for lack of effort or brilliance. but some problems require method, not cleverness. i tend to think that category theory might be a good candidate for providing high-level method.

view this post on Zulip dusko (Apr 20 2020 at 02:03):

in fact, there has been a bit of progress. we do have the robertson-seymour theory providing a family of algorithms where everything is in P, but cannot be succinctly encoded. so it is possible that our current hard problems with succinct certificates are not hard, and that there are hard problems, but no succinct certificates. or that there are no hard problems, but we will never know, because the solutions of easy problems are too big. that's knuth.

view this post on Zulip dusko (Apr 20 2020 at 02:10):

in any case, i am somehow thinking that category theory has been happier when it made complicated things simple than when it proceeded to make itself complicated. and complexity theory and cryptography have made themselves so complicated that they pretty much ground to a halt. so if categories helped with programming languages, they might help with algorithms.

view this post on Zulip (=_=) (Apr 20 2020 at 07:38):

John Baez said:

What do you think a "young category theorist" (or even an old and tired one) might contribute here?

Possibly quite a bit. Some thoughts:

view this post on Zulip (=_=) (Apr 20 2020 at 12:50):

dusko said:

so if categories helped with programming languages, they might help with algorithms.

I guess one basic question that I, and perhaps a few others, have is this: how would one go about defining complexity classes in category-theoretic terms?

view this post on Zulip Fabrizio Genovese (Apr 20 2020 at 14:37):

Heh, that's one of the hardest questions in ACT if you ask me

view this post on Zulip Valeria de Paiva (Apr 20 2020 at 18:19):

Rongmin Lu said:

John Baez said:

What do you think a "young category theorist" (or even an old and tired one) might contribute here?

Possibly quite a bit. Some thoughts:

yes, this paper of Andreas is very tantalizing. I've done some work with Samuel Gomes da Silva on applications of Blass' ideas to set theory ("Dialectica categories, cardinalities of the continuum and combinatorics of ideals", 2018), but I don't know of any work on complexity theory stemming from Blass.

view this post on Zulip dusko (Apr 20 2020 at 21:29):

Rongmin Lu said:

dusko said:

so if categories helped with programming languages, they might help with algorithms.

I guess one basic question that I, and perhaps a few others, have is this: how would one go about defining complexity classes in category-theoretic terms?

That is a nice concrete technical question. I have a concrete technical answer. I hope i'll upload it to arxiv within a month or so. It's a whole textbook. I have had the "good fortune" to be tasked with teaching computability and complexity every once in a while since 1995. Since roughly 2008, the course was mostly in string diagrams. Since 2016, it's been an undergraduate course. You never say the word "category", but draw pictures. Acceptable systems of indices are a simple monoidal structure. Then complexity measures are projections of Kleene's normal form. That is the idea. But making it useful takes a while. The early versions were pretty silly, and maybe the current one still is. I am rewriting the final chapter for the 4th time. That's not a good sign. But that can only be my own personal silliness. There must be a way to do complexity theory categorically, just like there must be a way for objects heavier than air, and different from birds, to fly in the air. It just takes time.
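
One rough way to read "complexity measures are projections of Kleene's normal form" (this is an editorial gloss, not the string-diagram construction of the textbook dusko describes): evaluate a program step by step and return the pair (result, steps taken); projecting out the second component gives a time-complexity measure, just as Kleene's normal form pairs the computed value with the search for a halting computation.

```python
# Sketch of "complexity measure as a projection": a small-step
# interpreter that returns (value, steps).  `program` is a list of
# (test, update) pairs; execution halts when no test matches.
def run(program, state, fuel=10 ** 6):
    steps = 0
    while steps < fuel:
        for test, update in program:
            if test(state):
                state = update(state)
                break
        else:
            return state, steps  # second component: the time measure
        steps += 1
    raise TimeoutError("fuel exhausted")

# Example: counting down from 7 yields value 0 in exactly 7 steps.
countdown = [(lambda s: s > 0, lambda s: s - 1)]
assert run(countdown, 7) == (0, 7)
```

The same pairing trick works for other resource measures (space, for instance, by projecting out the largest intermediate state instead of the step count).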

view this post on Zulip dusko (Apr 20 2020 at 21:53):

ah, i missed this:
John Baez said:

[snip]
What do you think a "young category theorist" (or even an old and tired one) might contribute here?

if there is a chance that old and tired category theorists might be interested to contribute here, then i would try to spell it out in more detail. there are aspects where that kind of investment would i think make a difference.

view this post on Zulip Mike Shulman (Apr 20 2020 at 21:58):

Perhaps some of the work on soft/light linear logics could be relevant?

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (Apr 20 2020 at 22:17):

On the topic of complexity theory vs categories, Damiano Mazza has been looking recently at connections between Constraint Satisfaction Problems and topos theory: https://www.tcs.ifi.lmu.de/research/dice-fopara2019/papers/towards-a-sheaf-theoretic-definition-of-decision-problems
(IIRC @Luc Chabassier worked on understanding those things at some point)

view this post on Zulip sarahzrf (Apr 20 2020 at 22:54):

in the vein of linear logic being applicable: a while back i sat down & rephrased bellantoni & cook's characterization of the ptime functions to be more PL-style and it looked an awful lot like a modal type theory
image.png

view this post on Zulip sarahzrf (Apr 20 2020 at 22:57):

(assuming i didn't mess up the translation, a function 2* → 2* should be ptime iff it is the semantics of some well-typed term here)
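
For readers who haven't met it, the flavour of Bellantoni & Cook's characterization can be loosely illustrated in plain Python (this is an editorial gloss, not sarahzrf's typed translation): arguments split into "normal" ones, which may drive recursion, and "safe" ones, which may only be carried along, and the recursive result may only land in a safe position.

```python
# Loose illustration of safe recursion on notation (Bellantoni-Cook).
# The normal/safe split is only a convention here, written "x; acc" in
# the docstrings, since Python cannot enforce it in the types.
def s_bit(b, w):
    """Basic successor s_b(; w): append bit b to the safe argument w."""
    return w + b

def dup(x, acc=""):
    """dup(x; acc): duplicate each bit of the normal argument x,
    recursing on x's notation (its binary digits)."""
    if x == "":  # base case: f(0; a) = g(a)
        return acc
    # step case: f(x.b; a) = h(x; a, f(x; a)); the recursive result
    # appears only in safe position, which forbids exponential blowup.
    return s_bit(x[-1], s_bit(x[-1], dup(x[:-1], acc)))

assert dup("101") == "110011"  # output length is 2 * |input|
```

Because each recursion step can only append a constant amount to safe data, outputs grow polynomially in input length, which is the engine behind the ptime characterization.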

view this post on Zulip sarahzrf (Apr 20 2020 at 22:58):

ugh im noticing some typos -.-

view this post on Zulip Pastel Raschke (Apr 20 2020 at 23:17):

https://arxiv.org/abs/1908.04922v1
in the setting of pointer recursive functions (proven equivalent to partial recursive functions), tropical tiering (inspired by martin hofmann's non-size-increasing types) captures several sub-polynomial classes

view this post on Zulip (=_=) (Apr 21 2020 at 05:43):

sarahzrf said:

in the vein of linear logic being applicable: a while back i sat down & rephrased bellantoni & cook's characterization of the ptime functions to be more PL-style and it looked an awful lot like a modal type theory
image.png

Martin Hofmann has written a couple of things about Bellantoni-Cook: here and here.

See also the link posted by Pastel Raschke above.

view this post on Zulip (=_=) (Apr 21 2020 at 06:13):

Valeria de Paiva said:

I don't know of any work on complexity theory, stemming from Blass.

I was also quite surprised by what he has done. The list only goes back to 1991, but Google Scholar points to papers before that time. I've restricted the search to papers with Yuri Gurevich, who consistently appears in the complexity theory papers of Blass.

view this post on Zulip sarahzrf (Apr 21 2020 at 06:19):

omg those hofmann papers look sick

view this post on Zulip sarahzrf (Apr 21 2020 at 06:19):

god i have too many papers open in tabs ill probably never get around to them :sob:

view this post on Zulip (=_=) (Apr 21 2020 at 06:22):

sarahzrf said:

god i have too many papers open in tabs ill probably never get around to them :sob:

There's this nifty feature called a bookmark. Also, you could probably code a browser extension that works like a spaced repetition system to remind you of papers you want to read, something like Anki but for bookmarks.
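
The "Anki but for bookmarks" idea above can be sketched in a few lines; this is a purely illustrative toy (no real browser-extension API, and a much cruder schedule than Anki's actual SM-2 algorithm): each saved paper gets a review date, and its interval doubles every time you actually look at it.

```python
# Toy spaced-repetition queue for saved papers: review intervals double
# on each review.  Class and method names are invented for illustration.
from datetime import date, timedelta

class ReadingQueue:
    def __init__(self):
        self.items = {}  # url -> (next_review_date, interval_days)

    def add(self, url):
        """New papers are due immediately, with a 1-day base interval."""
        self.items[url] = (date.today(), 1)

    def reviewed(self, url):
        """Mark a paper as looked at: schedule it out, double interval."""
        _, interval = self.items[url]
        self.items[url] = (date.today() + timedelta(days=interval),
                           interval * 2)

    def due(self, today=None):
        today = today or date.today()
        return [u for u, (when, _) in self.items.items() if when <= today]

q = ReadingQueue()
q.add("https://eprint.iacr.org/2020/196.pdf")
assert q.due() == ["https://eprint.iacr.org/2020/196.pdf"]  # due now
q.reviewed("https://eprint.iacr.org/2020/196.pdf")
assert q.due() == []  # pushed one day out; next review doubles again
```

A real extension would hang this off the bookmarks API and a notification timer, but the scheduling core is the whole trick.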

view this post on Zulip John Baez (Apr 21 2020 at 06:23):

Yeah, you never need to actually read the stuff. :upside_down:

view this post on Zulip sarahzrf (Apr 21 2020 at 06:28):

hgghhh⎄

view this post on Zulip Stelios Tsampas (Apr 21 2020 at 12:53):

Rongmin Lu said:

dusko said:

so if categories helped with programming languages, they might help with algorithms.

I guess one basic question that I, and perhaps a few others, have is this: how would one go about defining complexity classes in category-theoretic terms?

I can tell you what will not help: too general a category.

Starting from, say, a preorder to represent complexity classes sounds destined for failure. The question of complexity (and by extension the P vs NP problem) is a matter of the internals of algorithms. Forgetting about said internals sounds like a very bad idea. So if I were to do so, I would start from a category that can concretely represent algorithms and programs, and take it from there. But I am not going to do so, because there are better things to do than being humbled to oblivion by the enormity of this historic problem. ...Or are there really?

view this post on Zulip (=_=) (Apr 21 2020 at 13:21):

Stelios Tsampas said:

Starting from, say, a preorder to represent complexity classes sounds destined for failure. The question of complexity (and by extension the P vs NP problem) is a matter of the internals of algorithms. Forgetting about said internals sounds like a very bad idea. So if I were to do so, I would start from a category that can concretely represent algorithms and programs, and take it from there. But I am not going to do so, because there are better things to do than being humbled to oblivion by the enormity of this historic problem. ...Or are there really?

Yup, that's implicit in my question. I've seen several examples of these, so it's made me cautiously optimistic: cautious because people tend to want to shoehorn categories in without regard for the details, but optimistic because eventually a solution would present itself.

One example is in applying CT to ML. Fong et al. wasn't the first attempt at doing so, but I think they're probably on the right track (subject to further refinements), compared to previous iterations, because of the massive interactions their work has had with other relevant areas.

So I'm definitely hoping for something along the lines of Fong et al., but for complexity classes.

I mean, if you want to consider other things, try rewriting analysis in CT. :sweat_smile:

view this post on Zulip Amar Hadzihasanovic (Apr 21 2020 at 13:27):

I remember a comment by Scott Aaronson that half the wrong proofs he gets sent of P ≠ NP use category theory, and all of them are disproven immediately by the fact that they would also prove P ≠ NP relative to any oracle, which is false.
I read it as a suggestion that the Grothendieckian “relative point of view” fails with this problem... at least if one takes oracles as “base spaces” for categorical complexity.

view this post on Zulip Amar Hadzihasanovic (Apr 21 2020 at 13:28):

(I think Stelios' comment is making a similar observation, if not the same -- not claiming I'm adding much)

view this post on Zulip Amar Hadzihasanovic (Apr 21 2020 at 13:35):

I find it amusing that the situation seems to be the opposite of the Riemann hypothesis.
The quest for “the field with one element” is an attempt to find the right context in which the “relative point of view” will extend so that the the proof for curves over finite fields will extend to all global fields.
Whereas part of the difficulty of proving P vs NP is about containing the “relative point of view” so it doesn't kick in to invalidate a proof by making it parametric over oracles.

view this post on Zulip Fabrizio Genovese (Apr 21 2020 at 13:54):

Amar Hadzihasanovic said:

I remember a comment by Scott Aaronson that half the wrong proofs he gets sent of P ≠ NP use category theory, and all of them are disproven immediately by the fact that they would also prove P ≠ NP relative to any oracle, which is false.
I read it as a suggestion that the Grothendieckian “relative point of view” fails with this problem... at least if one takes oracles as “base spaces” for categorical complexity.

This is interesting, I didn't think people were trying to use cats to solve P!=NP at all!

view this post on Zulip Amar Hadzihasanovic (Apr 21 2020 at 14:03):

I'm afraid that the implication was that cranks (or semicranks) are trying to use cats to solve P ≠ NP. Not sure about “serious” attempts. :)

view this post on Zulip Amar Hadzihasanovic (Apr 21 2020 at 14:06):

Of course there's the problem that most “serious” mathematicians would never openly state that they are trying to solve P vs NP for fear of looking like cranks.
Unless they are using algebraic geometry, which automatically makes everything respectable.

view this post on Zulip Fabrizio Genovese (Apr 21 2020 at 14:12):

Amar Hadzihasanovic said:

Unless they are using algebraic geometry, which automatically makes everything respectable.

It's funny because it's true :D

view this post on Zulip Stelios Tsampas (Apr 21 2020 at 14:14):

Rongmin Lu said:

Stelios Tsampas said:

Starting from, say, a preorder to represent complexity classes sounds destined for failure. The question of complexity (and by extension the P vs NP problem) is a matter of the internals of algorithms. Forgetting about said internals sounds like a very bad idea. So if I were to do so, I would start from a category that can concretely represent algorithms and programs, and take it from there. But I am not going to do so, because there are better things to do than being humbled to oblivion by the enormity of this historic problem. ...Or are there really?

Yup, that's implicit in my question. I've seen several examples of these, so it's made me cautiously optimistic: cautious because people tend to want to shoehorn categories in without regard for the details, but optimistic because eventually a solution would present itself.

One example is in applying CT to ML. Fong et al. wasn't the first attempt at doing so, but I think they're probably on the right track (subject to further refinements), compared to previous iterations, because of the massive interactions their work has had with other relevant areas.

So I'm definitely hoping for something along the lines of Fong et al., but for complexity classes.

I mean, if you want to consider other things, try rewriting analysis in CT. :sweat_smile:

Agreed :). Could you point me to the work by Fong et. al. you mention?

view this post on Zulip Oscar Cunningham (Apr 21 2020 at 14:15):

If somebody does use category theory to prove P=NP they should title the paper 'That which is Trivial is Trivially Trivial'.

view this post on Zulip (=_=) (Apr 21 2020 at 15:52):

Stelios Tsampas said:

Agreed :). Could you point me to the work by Fong et. al. you mention?

Sorry, it's better known as Backprop as Functor (arXiv, LICS'19).

I should say that I think there's room for improvement, but it's sparked off work on lenses and optics, as well as game theory. Jules Hedges has a nice blog about this development.

view this post on Zulip (=_=) (Apr 21 2020 at 15:54):

Amar Hadzihasanovic said:

I remember a comment by Scott Aaronson that half the wrong proofs he gets sent of P ≠ NP use category theory, and all of them are disproven immediately by the fact that they would also prove P ≠ NP relative to any oracle, which is false.

That's probably a crucial observation to keep in mind.

view this post on Zulip (=_=) (Apr 21 2020 at 16:13):

Amar Hadzihasanovic said:

I find it amusing that the situation seems to be the opposite of the Riemann hypothesis.
The quest for “the field with one element” is an attempt to find the right context in which the “relative point of view” will extend so that the the proof for curves over finite fields will extend to all global fields.
Whereas part of the difficulty of proving P vs NP is about containing the “relative point of view” so it doesn't kick in to invalidate a proof by making it parametric over oracles.

So my take on this is that we don't focus on P and NP, but consider larger complexity classes first, before drilling down to them once we get a satisfactory theory.

In January, Ji et al. posted a preprint announcing that MIP* = RE. Scott Aaronson was really excited about it, and no wonder: if true, it would resolve the Connes embedding problem for von Neumann algebras in the negative.

What's interesting about the class MIP* is that it is defined in terms of games and quantum entanglement: this blog goes into more details. So it seems plausible to me that we could extend the ACT we have now to try to define this class. But that's just pure speculation on my part, of course.

view this post on Zulip Morgan Rogers (he/him) (Apr 21 2020 at 16:34):

Amar Hadzihasanovic said:

I remember a comment by Scott Aaronson that half the wrong proofs he gets sent of P ≠ NP use category theory, and all of them are disproven immediately by the fact that they would also prove P ≠ NP relative to any oracle, which is false. I read it as a suggestion that the Grothendieckian “relative point of view” fails with this problem... at least if one takes oracles as “base spaces” for categorical complexity.

I found an explanation of what you mean by this on the P versus NP Wikipedia page under "Results about difficulty of proof". Indeed, the summary there reads just like your interpretation, but to me it seems like exactly the opposite: the massive strength of relativisation is that, when done carefully, it precisely controls the parametricity of proofs. For example, in topos theory, we have (informally speaking) that everything that can be proven constructively applies over every topos, but anything requiring excluded middle applies only over Boolean toposes, anything relying on de Morgan's law applies only over de Morgan toposes, etc. If one defines oracles as "base objects" with enough structure that cases can similarly be distinguished (I'm not going to push toposes, but... they are right there...), it seems like a great approach to take, especially given that complexity really should be a relative concept, even in applications: if we find a black box that makes some algorithm really cheap to run, that necessarily changes the complexity theory landscape.

view this post on Zulip dusko (Apr 21 2020 at 17:20):

Amar Hadzihasanovic said:

I remember a comment by Scott Aaronson that half the wrong proofs he gets sent of P ≠ NP use category theory, and all of them are disproven immediately by the fact that they would also prove P ≠ NP relative to any oracle, which is false.
I read it as a suggestion that the Grothendieckian “relative point of view” fails with this problem... at least if one takes oracles as “base spaces” for categorical complexity.

Aren't both these arguments of the form: "Here is a way to prove A. But here is why that proof would not work. Therefore, A is false."? Scott Aaronson's book is a compendium of such arguments. He is a fantastic presenter, but gets taken by his own energy. There are many ways to relativize complexity besides oracles and Turing degrees. In fact, there is a complexity research community that thinks that the whole reason why it got stuck is that it has been worked out without the relativizations needed to express that a problem is hard in one parameter but easy in another. Check parameterized complexity. There are textbooks by Flum and Grohe, and by Downey and Fellows.

view this post on Zulip dusko (Apr 21 2020 at 17:29):

The main reason why it is hard to put category theory and algorithmics together is that both are branches of mathematics theorized about, to a large extent, in English. People philosophize about algorithms because the actual models are too verbose, being low level, and categories get verbose for different reasons. In any case, people seem much more likely to be impressionistic about algorithms and about categories than about, say, neural nets, or about heart diseases, where they seem much more likely to grab tools than to talk. That is at least my impression ;)

view this post on Zulip Valeria de Paiva (Apr 21 2020 at 19:05):

Rongmin Lu said:

Valeria de Paiva said:

I don't know of any work on complexity theory, stemming from Blass.

I was also quite surprised by what he has done. The list only goes back to 1991, Google Scholar points to papers before that time. I've restricted the search to papers with Yuri Gurevich, who's consistently appeared in the complexity theory papers of Blass.

well, indeed Andreas' work is formidable. what I meant is that I didn't see any other work of his on complexity in association with the dialectica/Vojtáš spaces that were mentioned.

view this post on Zulip sarahzrf (Apr 21 2020 at 22:06):

Aren't both these arguments in the form: "Here is a way to prove A. But here is why that proof would not work. Therefore, A is false."

no, because they aren't concluding "Therefore, A is false.", just "that proof would not work"

view this post on Zulip (=_=) (Apr 22 2020 at 03:53):

Valeria de Paiva said:

well, indeed Andreas' work is formidable. what I meant is that I didn't see any other work of his on complexity in association with the dialectica/Vojtáš spaces that were mentioned.

Sorry, now I understand, thanks.

view this post on Zulip Amar Hadzihasanovic (Apr 22 2020 at 10:58):

@dusko Apparently it's “parametErized complexity”, weird spelling. It seems very interesting. Thanks for the pointer!

view this post on Zulip dusko (Apr 22 2020 at 23:03):

Amar Hadzihasanovic said:

dusko Apparently it's “parametErized complexity”, weird spelling. It seems very interesting. Thanks for the pointer!

Other work by Fellows is worth checking out too. A very understated guy, with some very sharp results and insights. I got most of what I know about the parametErized project from his book with Downey.

(really parametErized? maybe parametErize is not the same thing as parametrize. maybe it means: "Express your measurements in metric, but vaguely: the distance from A to B is 13 pm, where pm may be mm, cm, or km." I am pretty sure that is what they mean.)

view this post on Zulip sarahzrf (Apr 22 2020 at 23:06):

:)
image.png

view this post on Zulip James Wood (Apr 25 2020 at 16:06):

Looks like the standard spellings are “parameterize” and “parametrise”.

view this post on Zulip Todd Trimble (Apr 25 2020 at 17:48):

All four can be seen, but in the American English I'm used to, it's "parametrize".

view this post on Zulip John Baez (Apr 25 2020 at 22:09):

I'm with Todd on this, but reasonable people from different places spell this word in lots of ways.

view this post on Zulip Joshua Grochow (May 20 2020 at 17:24):

dusko said:

Aren't both these arguments of the form: "Here is a way to prove A. But here is why that proof would not work. Therefore, A is false."? Scott Aaronson's book is a compendium of such arguments. He is a fantastic presenter, but gets carried away by his own energy. There are many ways to relativize complexity besides oracles and Turing degrees. In fact, there is a complexity research community that thinks that the whole reason why it got stuck was that it had been worked out without the relativizations needed to express that a problem is hard in one parameter but easy in another. Check out parametrized complexity. There are textbooks by Flum and Grohe, and by Downey and Fellows.

Parametrized complexity is very useful for connecting back to the practical success or failure of algorithms (e.g. why is it that many instances of SAT can be solved in practice, even though SAT is NP-hard?). But it does not "explain away" the difficulty of P vs NP. Indeed, many of the standard conjectures in parametrized complexity imply P ≠ NP, so the parametrized conjectures, while perhaps more relevant to practice, are in fact even harder than separating P from NP.
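[Editor's note: the "hard in one parameter, easy in another" idea can be made concrete with the textbook bounded-search-tree algorithm for Vertex Cover, the standard first example of fixed-parameter tractability. This is a minimal sketch, not taken from any of the books mentioned above; the function name is my own.]

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` (a list of pairs)
    has a vertex cover of size at most k.

    Any remaining edge (u, v) must have u or v in the cover, so we
    branch two ways and recurse with budget k - 1.  The search tree
    has at most 2^k leaves, giving O(2^k * m) time: exponential only
    in the parameter k, polynomial in the input size -- even though
    Vertex Cover is NP-hard when k is unrestricted.
    """
    if not edges:
        return True                      # nothing left to cover
    if k == 0:
        return False                     # edges remain, budget exhausted
    u, v = edges[0]                      # this edge forces u or v into the cover
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)
```

For example, a triangle has no cover of size 1 but does have one of size 2: `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1)` is `False`, while with budget 2 it is `True`.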

view this post on Zulip Joshua Grochow (May 20 2020 at 17:27):

People have mentioned linear logic a couple of times, but I'd like to press on this a bit. Ugo dal Lago has a whole line of work (under the name "implicit complexity theory", going back to Bellantoni-Cook) more or less rephrasing complexity classes in terms of linear logic. I spent a semester reading about linear logic with Paul Lessard, where my goal was to try and figure out whether the linear logic view on complexity classes (a la dal Lago (is that almost a palindrome?!)) could actually lead to new ideas for separating complexity classes, or whether it was "merely" a nice viewpoint, that could be more or less useful depending on one's taste. Didn't come to any conclusions :shrug:, but I think some CT experts who are down with linear logic could potentially really make some progress here. Hit me up if that's you :).
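[Editor's note: for readers who haven't seen the Bellantoni-Cook side of this, their function algebra characterizes polytime by splitting arguments into "normal" ones (allowed to drive recursion) and "safe" ones (not), so a recursively computed value can never fuel another recursion. A toy sketch of safe recursion on binary notation, with names of my own invention:]

```python
# Safe recursion on notation, Bellantoni-Cook style (toy sketch).
# Arguments left of the ';' in f(x; y) are "normal", those right of it
# are "safe"; recursion may only proceed on a normal argument, and the
# recursive value is passed back in a *safe* position, which is what
# keeps every definable function polytime.

def s0(y): return 2 * y          # append a 0 bit (safe constructor)
def s1(y): return 2 * y + 1      # append a 1 bit (safe constructor)

def safe_rec(g, h0, h1):
    """Schema: f(0; ys) = g(; ys);  f(si(z); ys) = hi(z; ys, f(z; ys))."""
    def f(x, *ys):
        if x == 0:
            return g(*ys)
        z, i = x // 2, x % 2
        prev = f(z, *ys)         # recursive value: safe position only
        return (h1 if i else h0)(z, *ys, prev)
    return f

# Example: exp2len(x) = 2^{|x|}, a 1 followed by |x| zero bits, where
# |x| is the bit length of x.  The step functions only apply the safe
# constructor s0 to the recursive value, which the discipline permits;
# defining 2^x instead would need the recursive value in a normal
# position, which is exactly what the discipline forbids.
exp2len = safe_rec(lambda: 1,
                   lambda z, prev: s0(prev),
                   lambda z, prev: s0(prev))
```

So `exp2len(5)` returns `8` (since 5 is `101` in binary, of length 3), while genuinely exponential functions like `2^x` are not definable in the algebra.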

view this post on Zulip Joshua Grochow (May 20 2020 at 17:29):

On a related note: if anyone has suggestions of where a mathematically inclined complexity theorist such as myself could read about linear logic in a way that might be useful for the above approach, I'd much appreciate it! The papers we were reading were interesting, but it was very slow-going (I think partly because the papers were about linear logic in general, which seemed much more general than the goal I was aiming for).

view this post on Zulip sarahzrf (May 20 2020 at 18:03):

here's one nice paper light_affine.pdf

view this post on Zulip sarahzrf (May 20 2020 at 18:04):

i have only read part of it myself tho :sweat_smile:

view this post on Zulip (=_=) (May 21 2020 at 04:46):

Joshua Grochow said:

Ugo dal Lago has a whole line of work (under the name "implicit complexity theory", going back to Bellantoni-Cook) more or less rephrasing complexity classes in terms of linear logic. I spent a semester reading about linear logic with Paul Lessard, where my goal was to try and figure out whether the linear logic view on complexity classes (a la dal Lago (is that almost a palindrome?!)) could actually lead to new ideas for separating complexity classes, or whether it was "merely" a nice viewpoint, that could be more or less useful depending on one's taste. Didn't come to any conclusions :shrug:, but I think some CT experts who are down with linear logic could potentially really make some progress here. Hit me up if that's you :).

Oh wow! That was pretty much what I was asking for in this topic: whether there's a way to use CT to tackle complexity classes. I wouldn't say I'm a CT expert, but this is something I'd be very interested in.

view this post on Zulip sarahzrf (May 21 2020 at 05:06):

i don't remember whether i posted this earlier, and it's only semi-relevant, but it's an absolutely delightful paper anyway, so
foundational_delineation.pdf

view this post on Zulip sarahzrf (May 21 2020 at 05:07):

it's the paper cited in the intro to bellantoni-cook

view this post on Zulip sarahzrf (May 21 2020 at 05:08):

this one
image.png
image.png

view this post on Zulip Morgan Rogers (he/him) (May 21 2020 at 10:11):

sarahzrf said:

i don't remember whether i posted this earlier, and it's only semi-relevant, but it's an absolutely delightful paper anyway, so
foundational_delineation.pdf

What's the "universal closure of the equations in PP" mentioned before Theorem 2.1?
Yeesh, keeping track of predicativity seems exhausting. Isn't it a little ironic to be as blatantly impredicative as "The entailment relation refers here to all structures of the appropriate vocabulary" in the metalogic? I suppose you have to be impredicative eventually, or you can't prove anything in any generality...

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 21 2020 at 23:43):

Joshua Grochow said:

People have mentioned linear logic a couple of times, but I'd like to press on this a bit. Ugo dal Lago has a whole line of work (under the name "implicit complexity theory", going back to Bellantoni-Cook) more or less rephrasing complexity classes in terms of linear logic.

To be pedantic, the Bellantoni-Cook recursion-theoretic approach to implicit complexity is quite different from linear logic; the first LL-based implicit complexity result is Girard, Scedrov and Scott's Bounded Linear Logic, which came out the same year (1992) and which I assume was discovered independently. Also, this line of work was already quite active in the French-Italian linear logic community before Ugo dal Lago entered the field (but he has indeed contributed a lot!).

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 21 2020 at 23:48):

Joshua Grochow said:

my goal was to try and figure out whether the linear logic view on complexity classes (a la dal Lago (is that almost a palindrome?!)) could actually lead to new ideas for separating complexity classes

Damiano Mazza, who has a lot of experience in the field, is rather pessimistic about such prospects. See section 4.2.1 of https://lipn.univ-paris13.fr/~mazza/papers/Habilitation.pdf

In this respect, logical approaches have proved to be of no use so far. Descriptive complexity [Imm99], based on finite model theory, has been quite successful at characterizing nearly every well-known complexity class. However, it has provided no workable ideas concerning lower bounds. Implicit computational complexity, which is one of the topics we touched upon in this thesis, is situated at the other end of the logical spectrum, at the interface between recursion theory, type theory and the theory of programming languages. This approach too seems to be worthless when it comes to lower bound techniques.

I think that's one of his motivations for turning to other topics at the interface of categories and complexity theory, such as the topos-theoretic viewpoint on CSPs which I mentioned earlier in this thread.

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 21 2020 at 23:58):

Joshua Grochow said:

On a related note: if anyone has suggestions of where a mathematically inclined complexity theorist such as myself could read about linear logic in a way that might be useful for the above approach, I'd much appreciate it!

Sadly I don't really have a source to recommend, I basically learned this stuff by talking with people who were working on it. One issue is that you would have to learn proof nets at some point and I don't know if there exists a good exposition of them (I mean, including exponential boxes; purely multiplicative proof nets have been used by people outside the Girard school, e.g. category theorists and deep inference proof theorists, so they are more likely to have been clearly explained somewhere).

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 22 2020 at 00:05):

Anyway, if people are interested in separating complexity classes using ideas coming from linear logic, I have to advertise Thomas Seiller's work (disclaimers: he's my PhD advisor, and there's no category theory involved). The idea is to represent various models of computation as generalized dynamical systems (think measure theory / ergodic theory), with this representation being initially inspired by semantics of linear logic (more precisely by "geometry of interaction" which is also at the heart of some of Dal Lago's results on logspace). I recommend reading his programmatic paper https://www.seiller.org/documents/articles/towards.pdf

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 22 2020 at 00:09):

A big achievement in this research programme is a reformulation and strengthening of a separation result due to Mulmuley, using the notion of topological entropy:
https://www.seiller.org/documents/articles/PRAMsLB.pdf

view this post on Zulip Lê Thành Dũng (Tito) Nguyễn (May 22 2020 at 00:12):

Also, in these slides he argues that his approach isn't subject to the Razborov-Rudich natural proofs barrier:
https://www.seiller.org/documents/LHC.pdf

view this post on Zulip Joshua Grochow (May 22 2020 at 15:09):

Nguyễn Lê Thành Dũng said:

To be pedantic, the Bellantoni-Cook recursion theory approach to implicit complexity is quite different from linear logic and the first LL-based implicit complexity result is Girard, Scedrov and Scott's Bounded Linear Logic which came out the same year (1992) and which I assume was discovered independently. Also, this line of work was already quite active in the French-Italian linear logic community before Ugo dal Lago entered the field (but he indeed contributed a lot!).

Thanks! I am not so familiar with this line of work - I mentioned dal Lago only because I had seen a bunch of papers by him on the topic. It is useful to have a quick overview of the history, such as what you wrote. Good to know.