Is the proof of P != NP by Ke Xu and Guangyan Zhou valid?
Resolved NO (Apr 15)

https://arxiv.org/abs/2302.09512

Resolves once either of the following is true:

  • There's an easily accessible and broadly accepted consensus on the subject, such as Wikipedia providing an answer.

  • Scott Aaronson or some other well-known and respected computational complexity person states it is or isn't valid, and there doesn't seem to be significant disagreement from any other well-known and respected computational complexity people.


🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ3,285 |
| 2 | | Ṁ1,909 |
| 3 | | Ṁ1,806 |
| 4 | | Ṁ1,458 |
| 5 | | Ṁ1,293 |

This paper has been published in Frontiers of Computer Science, 2025, 19(12): 1912405
DOI: https://doi.org/10.1007/s11704-025-50231-4


The appendices include an extended abstract of the paper and comments from seven experts.

There is a nice review on the paper: Computer Scientists Prove Some Logic Puzzles Have No Shortcuts
https://scienceblog.com/computer-scientists-prove-some-logic-puzzles-have-no-shortcuts/

@ZhiwenFang Thanks for linking this! However, the fact that the review mentions P vs NP only a single time, without commenting on what a breakthrough this result would be if confirmed, makes me think that whoever wrote the review is not really an expert in this subject. (To be clear, the reviewer being unknowledgeable doesn't make the original paper more likely to be wrong; it just doesn't add any additional evidence that it is right.)

@A I have a strong intuition that if we prove SAT requires exponential time to solve, P != NP would be a direct and obvious consequence. The true challenge and core contribution of such a proof, however, lies in establishing a lower bound demonstrating that specific SAT instances are inherently intractable without exhaustive search; P != NP would then simply be a corollary of this fundamental finding which doesn't need to be emphasized:)

@ZhiwenFang Yes, obviously if SAT requires exponential time to solve then P!=NP is a direct consequence:

  • SAT is in NP (because a satisfying solution can be verified in polynomial time)

  • If SAT requires exponential time to solve, then it cannot be solved in polynomial time, so SAT is not in P

  • If SAT is in NP but not in P, then P!=NP
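Both halves of that argument can be made concrete in a few lines of Python (an illustrative sketch only; none of this code is from the paper). `verify` is the polynomial-time certificate check that places SAT in NP, while `brute_force_sat` is the exhaustive 2^n search that the paper claims some instances cannot essentially improve upon:

```python
from itertools import product

def verify(clauses, assignment):
    """Check a candidate assignment against a CNF formula in time linear
    in the formula size -- the polynomial-time verification that puts
    SAT in NP. Clauses use DIMACS-style literals: integer k stands for
    variable |k|, and a negative sign means negation. `assignment` maps
    each variable number to a bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    """Exhaustive search over all 2^n assignments; returns a satisfying
    assignment or None. This is the 'brute-force computation' baseline
    the paper's lower-bound claim is measured against."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if verify(clauses, assignment):
            return assignment
    return None

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
clauses = [[1, 2], [-1, 2], [-2, 3]]
print(brute_force_sat(clauses, 3))  # → {1: False, 2: True, 3: True}
```

The gap between the two functions is exactly the P vs NP question in miniature: verification is cheap, but no known search procedure beats exponential time in the worst case.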

The paper itself states this too ("Subsequently, we prove, by the diagonalization method, that constructed self-referential CSPs cannot be solved by non-brute-force computation, which is stronger than P ≠ NP.")

My point is that regardless of exactly how this paper proves P!=NP, that is a monumental result, so it makes no sense for a discussion of this paper to mention it only in passing and ignore all the previous literature on the difficulty of this problem. For the paper itself to treat it as an obvious corollary is fine (I guess, if they're trying to be modest maybe) but for someone independent to have written a review about the paper and not taken the time to mention "wow this proves P!=NP, that's a really hard thing to do that will revolutionize the field of complexity theory, and lots of people have tried before and failed, but we think this time the approach actually worked because [X]" seems bizarre.

@A It makes sense. And I have the same feeling that they're trying to be modest:)

@ZhiwenFang You might also call this modesty, but most people who had proven P != NP would submit it to the top journal in computer science (or mathematics), not to an (internationally) obscure Chinese journal where one of the co-authors is a deputy editor.

@A Actually, as for why the authors did not emphasize the P vs. NP problem, we might find the answer in the appendix of the paper. The first paragraph of 'Appendix A' clearly states that the P vs. NP problem itself may be the biggest obstacle, hence the necessity to redefine the foundational problem of computer science. The second paragraph further explains why no non-trivial results have been achieved in the study of complexity lower bounds before — it is because the initial research path was wrong from the start. This is undoubtedly a harsh reality for scholars engaged in complexity theory research. I speculate that this may be the reason why the authors did not put much emphasis on the P vs. NP problem.

@FlorisvanDoorn The core value of this paper lies in its redefinition of the foundational problem in computer science based on Gödel's framework — specifically, the distinction between non-brute-force computation and brute-force computation. In contrast, the P vs. NP problem, which has long been a focus of academic research, is not a foundational issue in this field. Gaining widespread recognition for such a groundbreaking breakthrough in a short period is by no means an easy task. As mentioned in the appendix of the paper, the authors have shared their research findings with hundreds of experts worldwide, and more than 30 of them — including the founder of algorithmic information theory — have highly praised the work. From the perspective of academic history, many landmark achievements were not published in top journals. Therefore, the value of a research outcome cannot be judged solely by the rank of the journal in which it is published; instead, it must undergo professional evaluation based on the content of the achievement itself. It is worth noting that this journal has specifically published signed comments from seven experts, a practice that is quite meaningful — it precisely embodies the principle that the evaluation of academic achievements should be open and transparent.

@ZhiwenFang It's like if someone claimed to invent a spaceship that could go 1 billion miles per hour, and focused all their discussion on the details of the engine, but never discussed the fact that this is faster than the speed of light out of "modesty"...

(not a perfect analogy because these are "seemingly impossible" achievements in slightly different ways but you get the idea)

@A Aha, I like the analogy, but my interpretation is more like someone claims that to build a spaceship that could go 1 billion miles per hour is theoretically impossible before anyone knows/accepts the fact that the speed of light is the limit of any spaceship:)

A final comment on this market.

Every judgement should be based on knowledge, not ignorance.

@DZ1990 FYI: this paper has been published in Frontiers of Computer Science, 2025, 19(12): 1912405 (DOI: https://doi.org/10.1007/s11704-025-50231-4).

A comment on Xu and Zhou's paper.

Self-reference (not NP-completeness) is the key to understanding hardness.

Out with the boring old market, in with a shiny new market! https://manifold.markets/Tasty_Y/will-the-alleged-proof-of-pnp-by-ke

It might even resolve someday.

Alright, several people have pointed out several different flaws in the paper, and as far as I can tell nobody has even attempted to refute them. This market is one of the top Google results for "Ke Xu/Guangyan Zhou proof of P vs. NP", so I guess I'll consider these comments a "consensus". A little borderline, but I'm satisfied that if this paper had miraculously been correct, it would have been harder for people to find purported errors, and somebody would have provided a better defense of those errors.

@IsaacKing

Obviously, there is no consensus. I think that this paper will finally be proved valid. My judgement comes from the following points.

1. This paper established a connection with Hilbert's problem and Gödel's method. This connection is reasonable.

2. The essence of P vs. NP is proving lower bounds and there has been little progress in this direction in the past half-century because of various barriers. This paper proposed a constructive method that can avoid all known barriers in proving lower bounds.

3. Some other researchers have also proposed using the constructive method to prove lower bounds.

4. This paper clearly explained why previous attempts have failed and why their approach can work.

@DZ1990 Even if all of those points are true, they're not relevant to this market. This market is about whether the specific proof in the paper is valid, and there appear to be multiple flaws in it.

@IsaacKing

I do not think that these flaws are real flaws. They are just misunderstandings about the proof. Time will prove everything.

@DZ1990 Sure. Once the mainstream mathematical establishment realizes what they've missed, I'll re-resolve the market and send you an extra M$10,000 as an apology.

@IsaacKing

Thanks but M$10,000 is not enough because I have lost much more than that.

@DZ1990 If the proof is validated and he re-resolves the market to YES, you'd make back the full M$200,000 your shares were worth.

@Joshua Thanks, and let's see .........

@DZ1990 I think you should notice that, despite claiming there's no consensus, none of your points refer to a single independent mathematician, let alone complexity theorist, who thinks the paper is worth anything.

@jacksonpolack

I know something about proving lower bounds, which is exactly the subject of this paper. So I can judge the validity of this paper based on my own knowledge. I also heard something about this paper from TCS and math experts. As I said before, time will prove everything and everything will become history.
