Is the proof of P != NP by Ke Xu and Guangyan Zhou valid?
Resolved NO (Apr 15)

https://arxiv.org/abs/2302.09512

Resolves once either of the following are true:

  • There's an easily accessible and broadly accepted consensus on the subject, such as Wikipedia providing an answer.

  • Scott Aaronson or some other well-known and respected computational complexity person states it is or isn't valid, and there doesn't seem to be significant disagreement from any other well-known and respected computational complexity people.

🏅 Top traders (by total profit)

1. Ṁ3,285
2. Ṁ1,909
3. Ṁ1,806
4. Ṁ1,458
5. Ṁ1,293

A final comment on this market.

Every judgement should be based on knowledge, not ignorance.

A comment on Xu and Zhou's paper.

Self-reference (not NP-completeness) is the key to understanding hardness.

Out with the boring old market, in with a shiny new market! https://manifold.markets/Tasty_Y/will-the-alleged-proof-of-pnp-by-ke

It might even resolve someday.

Alright, several people have pointed out several different flaws in the paper, and as far as I can tell nobody has even attempted to refute them. This market is one of the top Google results for "Ke Xu/Guangyan Zhou proof of P vs. NP", so I guess I'll consider these comments a "consensus". A little borderline, but I'm satisfied that if this paper had miraculously been correct, it would have been harder for people to find purported errors, and somebody would have provided a better defense of those errors.

@IsaacKing

Obviously, there is no consensus. I think that this paper will finally be proved valid. My judgement comes from the following points.

1. This paper established a connection with Hilbert's problem and Gödel's method. This connection is reasonable.

2. The essence of P vs. NP is proving lower bounds, and there has been little progress in this direction in the past half-century because of various barriers. This paper proposed a constructive method that can avoid all known barriers in proving lower bounds.

3. Some other researchers have also proposed using the constructive method to prove lower bounds.

4. This paper clearly explained why previous attempts have failed and why their approach can work.

@DZ1990 Even if all of those points are true, they're not relevant to this market. This market is about whether the specific proof in the paper is valid, and there appear to be multiple flaws in it.

@IsaacKing

I do not think that these flaws are real flaws. They are just misunderstandings about the proof. Time will prove everything.

@DZ1990 Sure. Once the mainstream mathematical establishment realizes what they've missed, I'll re-resolve the market and send you an extra M$10,000 as an apology.

@IsaacKing

Thanks but M$10,000 is not enough because I have lost much more than that.

@DZ1990 If the proof is validated and he re-resolves the market to YES, you'd make back the full M$200,000 your shares were worth.

@Joshua Thanks, and let's see .........

@DZ1990 I think you should notice that, despite claiming there's no consensus, none of your points refer to a single independent mathematician, let alone complexity theorist, who thinks the paper is worth anything.

@jacksonpolack

I know something about proving lower bounds, which is exactly the subject of this paper, so I can judge the validity of this paper based on my own knowledge. I also heard something about this paper from TCS and math experts. As I said before, time will prove everything and everything will become history.

  1. I distrust the paper from equation 2.5 onward. It assumes that the probabilities that an assignment fails at different constraints are independent, and that seems off to me. Maybe they should have stated explicitly that the constraints are generated independently. (See the sketch after this list.)

  2. The proof of Theorem 2.5 seems invalid.
    > Let Î be the set of RB instances with n variables and rn ln d constraints, where each instance either has a unique solution or has no solution. Assume that I ∈ Î has exactly one solution σ, which happens with probability at least 1/6 from Corollary 2.3.
    But said Corollary 2.3 works on the set of all RB instances, not on the subset filtered to exclude instances with multiple solutions!
    Well, if you filter something out of a set, then the remaining things have higher probability. OK.

  3. Theorem 3.2 assumes that the same set of "subproblems" will be tested after the constraints are changed. It fails to account for cases where something like a hash of all the constraints is used to attempt to find the solution.
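To make item 1 concrete, here is a minimal Monte Carlo sketch (my own illustration, not code from the paper): in a Model RB-style setup where each constraint's forbidden tuples are drawn independently, the events "a fixed assignment violates constraint i" really are independent across constraints, so P(sat) = (1-p)^m holds as a product, which is the kind of factorization an equation like (2.5) needs. All names and parameter values here (random_rb_instance, satisfies, n = 8, d = 4, etc.) are assumptions for illustration.

```python
# Minimal Monte Carlo sketch of the independence point in item 1 above.
# Illustrative assumptions, not the paper's own code: a Model RB-style
# instance with n variables over a domain of size d, and m constraints,
# each over k variables, each forbidding a p-fraction of value-tuples
# chosen independently at random.
import itertools
import random

def random_rb_instance(n, d, m, k, p):
    """Generate m constraints; each picks k distinct variables and forbids
    round(p * d**k) value-tuples chosen uniformly at random."""
    all_tuples = list(itertools.product(range(d), repeat=k))
    n_forbidden = round(p * d**k)
    return [(random.sample(range(n), k),
             set(random.sample(all_tuples, n_forbidden)))
            for _ in range(m)]

def satisfies(assignment, constraints):
    """True iff the assignment's restriction to each constraint's scope
    avoids all of that constraint's forbidden tuples."""
    return all(tuple(assignment[v] for v in scope) not in forbidden
               for scope, forbidden in constraints)

if __name__ == "__main__":
    random.seed(0)
    n, d, m, k, p = 8, 4, 12, 2, 0.25   # toy parameters, not the paper's
    trials, hits = 50_000, 0
    for _ in range(trials):
        instance = random_rb_instance(n, d, m, k, p)
        assignment = [random.randrange(d) for _ in range(n)]
        hits += satisfies(assignment, instance)
    # Because constraints are generated independently, the violation events
    # for a fixed assignment are independent, and these two numbers agree.
    print(f"empirical P(sat) ≈ {hits / trials:.4f}")
    print(f"product  (1-p)^m = {(1 - p) ** m:.4f}")
```

The point of the toy run is that the agreement comes from the independent generation of constraints; if constraints were generated with correlations, the product formula would be an unstated assumption rather than a fact.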

@AnT First, I think you are right about the independent generation of constraints. This is a standard requirement in the study of random CSPs.

Second, considering that there are infinitely many instances with either one solution or no solution, and that the ratio between the two is linear, what you're concerned about shouldn't be an issue. (A first-moment computation illustrating this is sketched below.)

Third, I think Xu and Zhou's emphasis lies in the fact that solving a certain number of subproblems alone does not suffice to ascertain whether an instance is satisfiable. Consequently, the specific algorithms employed become inconsequential.
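For reference, here is the standard first-moment computation behind the "unique solution vs. no solution" intuition; this is a sketch using the parameterization quoted earlier (n variables, domain size d, m = rn ln d constraints, each violated by a fixed assignment with probability p, assuming the independence discussed above), not a derivation taken from the paper:

\[
\mathbb{E}[\#\text{solutions}]
  \;=\; d^{\,n}\,(1-p)^{m}
  \;=\; d^{\,n}\,(1-p)^{\,r n \ln d}
  \;=\; \exp\!\Big( n \ln d \,\big( 1 + r \ln(1-p) \big) \Big),
\]

so the expected count tends to infinity for r < 1/ln(1/(1-p)) and to zero for r > 1/ln(1/(1-p)); instances near that threshold are exactly where the "one solution or none" dichotomy becomes the relevant regime.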

Question to everyone who wants me to resolve this NO: Do you believe that this paper has been subjected to sufficient scrutiny that, if it were a legitimate proof, that fact would have been recognized?

It seems very plausible to me that, if a real proof were to somehow be discovered by a nobody, it would simply be dismissed on priors among the flood of invalid proofs. I only want to resolve this market if I'm confident that that is not what happened here.

If I were a nobody who had resolved P vs NP, I'd visit the nearest computer science department and offer $100 and free beer to the first grad student who finds a bug. Once a critical mass of grad students failed to find the bug, their advisors would try to help them find it, and interest in the proof would percolate. Or maybe I'd do something similar on Twitter. I interpret the fact that the authors haven't done this as Bayesian evidence that the proof is likely to be wrong (on top of the fact that I've personally looked at it and it's wrong).

@gregrosent I certainly agree with that. But it's always possible that the authors don't realize the significance of their discovery, or couldn't think of any way to get it noticed by the greater community, or don't care. (Consider what happened with the fast inverse square root algorithm.) So I'd like this market to be resolved based on actual inspection of the information in the paper, not circumstantial evidence.

@IsaacKing Yes, I think one problem (Lemma 3.1) has been discovered.

@jacksonpolack Are we confident that's an actual error? Is the assumption false?

@jacksonpolack @IsaacKing

As I mentioned before, the innovation of Xu and Zhou's paper is establishing a connection with Hilbert's problem and Gödel's method. From this perspective, the most important contribution of this paper is the method of constructing extremely hard examples (or bad inputs). This is also very similar to Gödel's construction method, because they both involve the self-referential property. Lemma 3.1 is proved based on an assumption. In my opinion, this is not a problem because Gödel's proof is also based on assumptions. The problem is whether this assumption is acceptable or can be replaced by a better one. Obviously, this will not affect the validity of this paper.

@gregrosent

I think that many complexity theorists have read this paper. I agree with what @jielinj commented on Xu and Zhou's paper two months ago: "However, the terrible thing is that this paper does not build on any previous work of complexity theory in the last 50 years. Therefore, if this paper is valid, it means that complexity theory has been a waste of time. I think that this might be the reason why no respected complexity theorist comes out to comment on this paper".

@gregrosent

Besides the ZDNET website, the CACM website also mentioned the AI-generated proof based on Xu and Zhou's paper last year. So this paper must have been read by many CS people.

https://cacmb4.acm.org/careers/276930-can-generative-ai-solve-computer-sciences-greatest-unsolved-problem/fulltext

The proof was also mentioned on Twitter, with quite a lot of views.

https://twitter.com/_akhaliq/status/1701763296460697805?s=46&t=12SnLJISnrM2vWArk4LfeA

https://twitter.com/MikePFrank/status/1714638917482467478

https://twitter.com/SametOymac/status/1701966194289324345

So, I'm not a complexity theorist. But I've read a decent amount of writing by people who know what they are doing, in pure math and other areas, and a decent amount of writing by people who do not know what they are doing. And - and this is just an outside view, independent of what's being said - David's writing drips with, practically screams, 'doesn't know what he's doing'. I put a probability of ~0.1% on David engaging with this subject in a way that's going to create mathematical comprehension, or on him being able to adequately judge the correctness of proofs of the most difficult outstanding problems in pure math.

> Lemma 3.1 is proved based on an assumption. In my opinion, this is not a problem because Gödel's proof is also based on assumptions. The problem is whether this assumption is acceptable or can be replaced by a better one. Obviously, this will not affect the validity of this paper.

This is deeply untrue. Gödel's incompleteness theorem, along with other theorems, has two kinds of 'assumptions', depending on how you interpret the term. One kind is the basic axioms of mathematics required to prove it, for instance ZFC or PA. The other kind is the assumptions within the thing being proven - if you prove 'for all even integers i > 2, i is not prime', then 'i is even' is an assumption. The kind of assumption in Lemma 3.1 is neither of those; relative to the question of 'P != NP', it's an extra thing you need to add. So what this paper really proves (although I think it has more errors) is that axioms + (assumption for Lemma 3.1) imply P != NP, and that's not what this question is about. So if we take your statement seriously, this paper does not, and cannot in its current form, resolve the question of P != NP, and this market should resolve NO. (I'm not actually sure the paper's authors would agree with that statement.)
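One way to put that point in symbols (my sketch, not notation from the thread; A stands for the unproved assumption behind Lemma 3.1, and ZFC for whatever base axioms are in play):

\[
\underbrace{\mathrm{ZFC} \vdash \big(A \rightarrow \mathrm{P} \neq \mathrm{NP}\big)}_{\text{what the paper argues}}
\qquad \text{vs.} \qquad
\underbrace{\mathrm{ZFC} \vdash \mathrm{P} \neq \mathrm{NP}}_{\text{what this market asks about}}
\]

The first does not yield the second unless A is itself proved from the axioms, which is precisely the gap being pointed out.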


@jacksonpolack

The authors of the paper (https://arxiv.org/abs/2401.01193), i.e. Dong, Zhou and Xu, have clearly explained in Section 3.4 why an assumption is necessary for proving lower bounds.

"The form of algorithms is defined by Turing machines, while the essence of algorithms is how to use all possible mathematics to solve problems efficiently. Essentially, proving unconditional lower bounds for a given problem (including proving P != NP) is using mathematics to question mathematics itself. This is a self-referential question independent of the axiomsof mathematics. The fundamental reason is that self-referential statements about mathematical impossibility cannot be proved unconditionally, and even has no partial result. For example, consistency (i.e., impossibility of contradiction) is the most basic requirement for mathematics itself. However, as implied by Gödel's second incompleteness theorem, mathematics, no matter how powerful it is, cannot prove its own consistency. This remarkable theorem, without any partial result, is established on underlying assumptions about the essence of formal proofs."

I think if you want to establish the ........ independence of P=NP from ZFC or some other axiom system, you'd want to do that with more detail than a single paragraph of prose.

Like, that paragraph is "trisecting"-tier nonsense.

@jacksonpolack

This explanation looks reasonable to me. I do not know if the authors will establish this in a formal way.

Indeed, there has been no news about what top math experts think of this proof. However, the webpage (https://rjlipton.wpcomstaging.com/2023/10/14/possible-impossibilities-and-impossible-possibilities/) mentioned a news item:

> SCIENCE: Has P ≠ NP finally been solved? The claim has now been checked by several top math experts and there is some hope that it is correct. The proof is by a clever insight…

Although the author of the blog said that it is fake news, I think that this news cannot have come out of nowhere.