Tokyo-based research lab sakana.ai recently (August 2024) released an AI research agent which they claim can automate the entire ML research lifecycle: proposing experiments, writing code, running that code, and collecting results. (https://x.com/SakanaAILabs/status/1823178623513239992)
At least in theory, this means that we have entered the era where an AI could plausibly publish a paper all by itself, without any coauthors.
This market resolves YES if, by the end of 2025, a scientific paper with an AI (whether Sakana's or not) as its sole author has been published in a peer-reviewed scientific journal, and that paper has received at least 100 citations in other papers published in peer-reviewed journals.
The paper does not need to be AI-related. Citation counts will be checked on OpenAlex: https://openalex.org
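For reference, here is a minimal sketch of how a citation count could be checked programmatically against OpenAlex's public API (assuming Python with the requests library; the function name is illustrative, and the example DOI is the one used in OpenAlex's own documentation):

```python
import requests

def cited_by_count(doi: str, mailto: str = "you@example.com") -> int:
    """Look up a work's citation count on OpenAlex by DOI.

    Caveat: cited_by_count covers all works OpenAlex indexes, not just
    peer-reviewed journal articles, so the citing works would still need
    a manual check against this market's resolution criteria.
    """
    url = f"https://api.openalex.org/works/doi:{doi}"
    # Passing mailto puts the request in OpenAlex's "polite pool"
    # (faster and more reliable than anonymous requests).
    resp = requests.get(url, params={"mailto": mailto}, timeout=30)
    resp.raise_for_status()
    return resp.json()["cited_by_count"]

# Example using the DOI from OpenAlex's documentation:
print(cited_by_count("10.7717/peerj.4375"))
```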
Yes, that's a possibility I was getting at in my most recent comment.
It's also possible that it will be non-fraudulently submitted to a small, non-scam journal. There are lots of small journals with lax rules in e.g. philosophy.
Or a major journal could team up with the AI lab and even publish an entire issue of AI-written articles.
IDK, I never paid that much attention and I've forgotten by now. But you come across lots of them; the head of the philosophy department where I took courses founded one, for example. Very small, but also not a scam. Anyway, I doubt the article that resolves this market will come from a small journal; it will probably be either an AI not revealed to be an AI, or a partnership between a lab and a major journal.
Also, clarifying which sources of citation data you accept is important. E.g. Google Scholar is relatively easy to game/manipulate and gives noticeably higher total citation counts than, say, Scopus/WoS. Among open sources, OpenAlex is pretty decent in my recent experience (and also gives substantially lower citation counts than Google Scholar).
It might be good to bear in mind that a) the papers it produces are trash, and b) most (if not all) non-predatory publishers have explicit policies against AI authorship of papers (e.g. Springer: "Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria (imprint editorial policy link). Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs." https://www.springer.com/gp/editorial-policies/artificial-intelligence--ai-/25428500)