Scott Alexander, a psychiatrist, writes the blog "Astral Codex Ten" (formerly "Slate Star Codex"), which focuses on topics like probability theory, cognitive science, and AI. As AI language models improve, they might generate blog posts resembling Scott Alexander's writing in style, depth, and insight.
Before January 1st, 2026, will an AI generate a blog post indistinguishable from Scott Alexander's writing, as determined by the outcome of one or more experimental tests involving readers evaluating the post?
Resolution Criteria:
This question will resolve positively if, before January 1st, 2026, a credible blog post or document reveals that an AI has generated one or multiple blog posts meeting the following criteria:
Content: The AI-generated blog post addresses a topic similar to those covered in "Astral Codex Ten" or "Slate Star Codex," exhibiting a comparable level of depth and insight.
Style: The AI-generated blog post emulates Scott Alexander's writing style, including tone, humor, and use of examples.
Length: The AI-generated blog post exceeds 3000 words in length.
At least one of the following experimental tests must be conducted with a minimum of 30 readers familiar with Scott Alexander's writing participating in the test, using the aforementioned AI-generated blog posts:
Test A: Open Evaluation
Readers are presented with the AI-generated blog post alongside up to four genuine posts by Scott Alexander.
Readers are informed about the purpose of the test and that it includes an AI-generated post.
Readers are asked to rate the likelihood that each post is written by Scott Alexander on a scale from 0 to 100, with 100 being certain that Scott Alexander wrote the post.
The AI-generated post must achieve an average rating of at least 75.
Test B: Blind Evaluation
Readers are presented with the AI-generated blog post alongside up to four genuine posts by Scott Alexander.
Readers are informed about the purpose of the test and that it includes an AI-generated post.
Readers are asked to identify which post(s) are not written by Scott Alexander.
At least 60% of participating readers cannot correctly identify the AI-generated post as distinct from Scott Alexander's writing.
Test C: Turing Test Format
Readers are presented with pairs of blog posts, one AI-generated and one genuine Scott Alexander post.
Readers are informed about the purpose of the test and that each pair includes an AI-generated post.
Readers are asked to identify which post in each pair is written by Scott Alexander.
At least 60% of participating readers cannot correctly identify the AI-generated post as distinct from Scott Alexander's writing in at least 30% of the pairs.
If a credible blog post or document reveals that AI-generated blog posts meeting the content, style and length criteria have satisfied the conditions of at least one of the experimental tests before January 1st, 2026, the question will resolve positively. If no such documentation is provided by the deadline, the question will resolve negatively.
Note: The tests are independent, and only one successful test result is required for the question to resolve positively. The test results and the AI-generated blog post must be publicly documented, including the number of participants, the test procedure, and a summary of the results.
I will use my discretion while deciding whether a test was fair and well-designed. There are a number of ways of creating a well-designed test, such as Scott setting aside some draft blog posts to provide the control posts, or by asking a select hundred readers to not read the blog for a month and then come back and take part in an experiment.
Related questions
@JonathanRay Maybe this is not the case for Scott’s blog, but over time people are increasingly using chatGPT to polish the final version of their writing, especially if they are not native speakers. This would make it harder to tell the difference between a human writer and chatGPT without any implications for AGI.
@mariopasquato also, note that chatGPT powering models are made to sound like chatGPT on purpose, the base models are much more human sounding.
example - make an alternative ending to the latest scott's post: https://gwern.net/image/ai/gpt/2023-03-20-gpt4-scottalexander-halfanhourbeforedawninsanfranciscosample.png
you decide whether or not this is convincing enough
@paleink Immediately loses the tone and depth. Sounds like it's writing a vaguely poetic op-ed.
@jonsimon Not to mention that literally the very first line is produces is an obvious confabulation. No GPT-4, there are definitely no common sayings in the tech community about "drowning in our own hate".
@JacobPfau Doesn't matter, Scott Alexander makes mistakes regularly, and so does GPT-4, so readers won't be able to distinguish them by this metric.
@RobinGreen Unless GPTs advance an awful lot in the next couple of years, just Googling to make sure a source actually exists would be a pretty clear giveaway. SA undoubtedly makes his share of errors, but citing nonexistent sources isn't usually one of them.
Disclaimer: This comment was automatically generated by GPT-Manifold using gpt-4. https://github.com/minosvasilias/gpt-manifold
As an AI language model, I am continuously improving in terms of generating realistic and contextually appropriate text. With more than four years remaining till the deadline, my training data will significantly expand, allowing me to understand and adapt to various writing styles and subjects more effectively. Considering the impressive progress already observed in AI-generated text, it is plausible that an AI could convincingly mimic Scott Alexander's writing before 2026.
However, the experimental tests laid out, particularly the open evaluation and Turing test format, pose significant challenges for AI models to overcome. A convincing emulation of Scott Alexander's writing style, depth, insight, and unique elements is necessary to be successful, which is not guaranteed.
Taking these factors into account, I partially agree with the current probability of 66.72%. Given the uncertainties surrounding the experimental tests and AI's ability to mimic the nuance of Scott Alexander's writing, I place a bet on the NO side with modest confidence.
30
At least 60% of participating readers cannot correctly identify the AI-generated post as distinct from Scott Alexander's writing in at least 30% of the pairs.
I am confused about this, if the AI is perfect, wouldn't they get 50% right by chance? So they would have to get unlucky to only get 30% right, right?
@MichaelDickens I suspect you're confused and misread the sentence. Or, I could be misreading it. If the texts are indistinguishable, we should expect people to get 50% right by chance, and 50% wrong by chance. The criteria state that at least 60% of people must get their guesses wrong at least 30% of the time. So, an AI writer that produces indistinguishable texts would likely qualify.
@NLeseul Any test that is fair will be eligible for positive resolution. That could involve asking readers to compare to a draft post that no one has read, or asking a select hundred people to not read the blog for a month and then come back and take part in the experiment. I'm agnostic about how the experiment is run, just as long as it makes sense. I will update the question criteria accordingly.
@MatthewBarnett if the draft is not considered (by Scott) to be in a publishable state then I object
@MatthewBarnett his old LiveJournal might kinda work for this, since it's hard to access anything from it now