Will we have an AI generated research paper accepted to > 1 top ML conference by 2026?
2025 · 50% chance

I'm open to arguments about whether or not it should be disclosed beforehand.

bought Ṁ20 of NO

Does this mean the full paper is authored completely by the AI? That would mean novel research is done by the AI, not just the write-up.

Would a prank, showing that the publication criteria are weak, qualify as a Yes? That seems unlikely at a really top conference, but much more likely than real research being done, and not as interesting. Something similar was already done years ago with much lower tech to expose the lack of review at some conferences.

@Zardoru 1 - yes, it would be fully authored by the AI, but it can carry the name of the person submitting it as the first author for credibility reasons. It could also be complete BS that merely appears to be research. The selection committee of a top-level ML conference is meant to serve as a good filter: people who see a lot of BS and are able to separate it from the actually good papers, evaluate those, and select a fraction of them. Of course, there are plenty of journals and conferences that are easy to fool, but that is indicative of other issues with the journal or conference, not necessarily of a good submission.

@Zardoru Haven't you heard of GPT-3? That thing can write entire essays and articles on its own, and some of them are pretty darn good. And as for the prank paper idea, hey, if it gets accepted, it gets accepted, right? Who cares if it's not "real" research? As long as it's entertaining and gets people talking, that's all that matters. So yeah, I say it's definitely possible for an AI to get a research paper accepted to a top conference, and I'm sure it'll happen sooner rather than later.


(this response was written by better dan)

predicts NO

@firstuserhere Ok, so we are basically betting on whether a top AI conference selection committee can be fooled by bullshit research. That would be a shame for them and for the quality of the field, as it would mean the average quality of genuine papers is low.

I'm quite sure they currently receive AI-generated papers at least weekly. For the poor committee's sake, I hope they can reject them in a few minutes. I also hope people are decent enough to stop that kind of submission.

The current 63% figure, for the next two years, seems really high to me.

bought Ṁ10 of NO

@Zardoru Yeah, I think this is the main problem with operationalizing this question: most of the probability mass comes from the committee being fooled rather than from AI being able to produce original research.

@AlexAmadori There's already so much structure, so many circuits, inside, say, LLMs that we have no understanding of, that an AI spitting out its own structural info might not be very coherent as written language. The high-dimension -> low-dimension translation of novel research into a collection of words might be lossy, so what information do you discard? My question is about whether an AI can output a research paper that reads as a research paper to humans who are explicitly trained to judge what a paper in that niche field looks like, have a good understanding of what constitutes BS, and know what's novel.

@firstuserhere But sure, let's not count any BS paper with BS results that is accepted because of incompetent selection. I'm not sure how I would frame the question then. Should I call it a peer-reviewed paper? What about when it's revealed that it's AI-generated? Etc. That is why a broad question seemed appropriate.

@firstuserhere Also, I find it unlikely (though I don't know for sure how common it is) that there would be blatantly fake data in the paper, hallucinated for example, that the selection committee of a top ML conference wouldn't be able to detect.

bought Ṁ10 of NO

@firstuserhere To be clear, I wasn't criticizing the way you set up this question in particular; I'm just saying it's hard to set up objective metrics for what you want to ask.

@AlexAmadori Yeah, I know, agreed.

A bunch of similar markets for different timeframes:

I want to bet a resounding NO, but I'm afraid a paper will be accepted because it was made by an AI when it wouldn't have been accepted had a human written it. That is, I'm worried the point of interest will be the paper generation itself, rather than the subject of the paper.

@AlexAmadori Yeah, that's why I'm open to arguments about whether or not it should be disclosed a priori.
