By the end of 2028 will AI be able to write an original article and get it accepted in a prestigious Philosophy journal?
53% chance

Acceptable journals (taken from a list by Brian Leiter based on a survey):

1.  Philosophical Review

2.  Noûs

3.  Philosophy & Phenomenological Research

4.  Mind

5.  Journal of Philosophy

6.  Australasian Journal of Philosophy

7.  Philosophical Studies

8.  Philosophers' Imprint

9.  Philosophical Quarterly

10. Analysis

11. Synthese

12. Canadian Journal of Philosophy

12. Proceedings of the Aristotelian Society

14. Ergo

14. Erkenntnis

16. European Journal of Philosophy

16. Pacific Philosophical Quarterly

18. American Philosophical Quarterly

19. Journal of the American Philosophical Association

20. Inquiry

21. Philosophical Perspectives

22. The Monist

23. Thought

24. Philosophical Issues

25. Philosophical Topics

25. Ratio

The article must be original, not a review paper or a book review. It must be at least 3000 words long, not including the bibliography but including discursive footnotes. The paper must be accepted for publication; if the paper is withdrawn or rejected after acceptance once it is revealed that it was written by AI, that will still count. The review must be blind, and in particular the reviewers and editor must not know that the paper was written by AI. The paper must be entirely written by AI, with no help, suggestions or commentary by humans (amendments to appease reviewers and editors are acceptable). It is acceptable for humans to assist, e.g. with interfacing with the internet, submitting the paper, etc., but they must not help, in my best judgment, in any way with the content, research or prose; this includes suggesting the topic. The paper may be on any topic related to philosophy, but may not be a largely empirical paper, may not be a close reading of a single text, and may not be largely mathematical.*
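For concreteness, here is a minimal sketch of how the word-count criterion might be checked mechanically. It is purely illustrative and not part of the resolution criteria: it assumes a plain-text manuscript in which the bibliography begins at a line reading "Bibliography" or "References" (an assumed convention), and counts everything before that heading, footnotes included. In practice I would judge borderline cases myself rather than rely on a script like this.

```python
import re

def meets_length_criterion(manuscript: str, minimum_words: int = 3000) -> bool:
    """Rough check of the 3000-word criterion.

    Assumption (hypothetical convention): the bibliography starts at a line
    reading 'Bibliography' or 'References'. Everything before that heading,
    including discursive footnotes, counts toward the total.
    """
    # Split off the bibliography, if a recognisable heading is present.
    parts = re.split(r"(?mi)^\s*(?:bibliography|references)\s*$", manuscript, maxsplit=1)
    body = parts[0]
    return len(body.split()) >= minimum_words
```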


Finally (and I know this will sound like an odd condition), the paper must not rely on any 'tricks' to get published. It has to be, broadly speaking, a standard philosophical paper. I will only apply this criterion to rule out a paper if it is truly unusual. The paper “Can a good philosophical contribution be made just by asking a question?” would be rejected by this criterion, although of course it would also be rejected by the length criterion.

The system can make as many attempts as it likes, but must respect standard rules, e.g. not having more than one paper under review at a journal at a time.

I won't bet in this market.

* This is not because I do not regard mathematical or empirical work as philosophical, but simply because I want to be absolutely sure that the paper is a philosophical paper, and assessing these sorts of papers for whether they are philosophical would require a lot of subjective judgment on my part. I have excluded close readings of a single text not because these are 'unphilosophical', but because the skillset involved, relative to other philosophical work, is somewhat unusual, and so an AI able to do this may not be able to write other kinds of philosophical paper.


The paper must be entirely written by AI, with no help, suggestions or commentary by humans

How in God's name do you expect an AI to write something in particular if you are not allowed to tell it what to write?

Is this a trick question, maybe? Not a test of silicon AIs, but rather of the biological AIs betting on this question, perhaps to see whether they have any self-awareness?

I am starting to wonder if this is air that I am breathing.

@JohnnyTwoFingers The command to write a paper meeting these criteria does not count as help. We are concerned with help, suggestions or commentary on how to write the paper.

@TimothyScriven Right, but the question remains: how is it supposed to do something without knowing the constraints?

This seems like yet another example of the biological AIs shifting the goalposts as their silicon counterparts continue to catch up with them.

