Will any notable scientist or public intellectual posthumously publish as an AI simulacrum before EoY 2027?
8% chance · Ṁ6832 · closes 2028

Even though the brain takes in billions of bits per second of raw information through the optic nerves, touch, etc., a serial embedding bottleneck constrains what is actually retained to a much smaller amount. The conscious mind probably runs at about 3-4 tokens per second; some numbers to help justify this claim:

- Various studies of memory retention find the brain has a serial embedding bottleneck of about 40-60 bits per second (source: Silicon Dreams: Information, Man, Machine by Robert Lucky). Assuming a roughly LLM-sized token dictionary, that works out to somewhere in the realm of 3-4 tokens per second (see the sketch after this list)

- "A typical speaking rate for English is 4 syllables per second." (https://en.wikipedia.org/wiki/Speech_tempo)

- Humans have a reaction time ranging from around 1/4 to 1/3 of a second, or an implied action-token production rate of 3-4 per second
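
A quick back-of-the-envelope for the bits-to-tokens conversion (a minimal sketch; the GPT-2-sized dictionary of 50,257 tokens and the exact bit rates are assumptions for illustration):

>>> import math
>>> bits_per_token = math.log2(50257)  # bits needed to pick one entry from a GPT-2-sized dictionary
>>> round(bits_per_token, 1)
15.6
>>> round(40 / bits_per_token, 1), round(60 / bits_per_token, 1)  # 40-60 bits/s in tokens/s
(2.6, 3.8)

So the memory-retention estimates land at roughly 2.6-3.8 tokens per second, in line with the speech-tempo and reaction-time figures.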

This implies that a human trains on only a few billion tokens during a lifetime. Sixteen waking hours a day over a 79-year lifespan comes to about 1.66 billion seconds:

>>> 60 * 60 * 16 * 365 * 79
1660896000

which at the 3-4 tokens per second estimated above works out to roughly 5-6.6 billion tokens.

While from a raw parameter count perspective a human brain is something on the order of 86 trillion parameters (86 billion biological neurons with roughly 1,000 synapses each), it is vastly undertrained according to data scaling laws. Relatively small models like LLaMa 3 70B get close to the linguistic competence of a human being with a parameter count on the order of a mouse brain's synapse count under that same 1,000-synapses-per-neuron assumption. This implies that the human mind pattern is much more compressible than previously believed. The existence of GPT itself implies that a human mind pattern, or at least substantial and productive fractions of one, can be learned from textual evidence alone. Scaling language models also improves their internal alignment with human neural representations: https://arxiv.org/html/2312.00575v1
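
To put a number on "vastly undertrained" (a minimal sketch; the ~20 tokens-per-parameter Chinchilla heuristic and the synapse-as-parameter equivalence are assumptions for illustration, not anything the brain is known to obey):

>>> params = 86e12  # synapse count treated as a crude parameter count
>>> lifetime_tokens = 1660896000 * 4  # waking seconds times 4 tokens/s
>>> chinchilla_tokens = 20 * params  # Chinchilla-style compute-optimal data budget
>>> round(chinchilla_tokens / lifetime_tokens)
258896

A brain-sized model would "want" roughly five orders of magnitude more data than a human lifetime provides, which is what makes the compressibility observation above so striking.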

The practical outcome of this is that while "grief bots" that mimic the dead are currently a novelty, as AI researchers get over chatbots and look toward the larger potential of these systems, we will begin to realize that a high-fidelity simulacrum of a productive researcher or intellectual can multiply that researcher's productivity. Furthermore, the digital copies can continue to work after their biological death. This question concerns that scenario.

It resolves YES if, before the end of 2027, there is any published paper or major editorial credited in whole or in part to an AI simulacrum of a deceased author, and this is generally accepted as an addition to that author's corpus. "General acceptance" means it is listed in things like bibliographies for that author, or the editorial appears under their name without qualifications or excuses for quality.

To be notable, the scientist or public intellectual must have one of:

1. An established Wikipedia article.

2. A Twitter account or similar with over 10k followers.

3. Held a professorship, endowed chair, fellowship, etc., at a major university or industrial research laboratory[0].

4. A prestigious award or recognition in their field, such as a Nobel Prize, Fields Medal, Turing Award, or equivalent[1].

5. 90th percentile h-index for their field or subdomain.

Their paper must be published in a peer-reviewed journal, a conference proceedings, or a commonly accepted venue for established knowledge in that field or discourse. If it is an editorial, it must be published in a major newspaper or functional equivalent.

If no such paper or editorial exists by the closing date it resolves NO.

CLARIFICATION/UPDATE: I won't resolve it YES if the inclusion is clearly a joke, since that wouldn't be "general acceptance", by which I mean that the people immediately involved in the publication consider it an extension of that person's work. How they feel about it 'being' that person is out of scope; e.g., they could consider the bot to be a work by the person which produces new works, rather than literally them.

[0]: Claude wrote this bullet.

[1]: Ibid.


Do journals even accept AI-written papers? Even if the AI is trained on a human brain or writing?

Something that far out of the Overton window won't be accepted within just a few years, even if it were a faithful simulacrum made using Whole-Brain Emulation. But prompting or even training an LLM agent to act like a deceased person is really not anything more than what Elvis impersonators do. Maybe someone will be eccentric enough to put a request like this into their will. But, at best, it's like someone's child writing in their parent's name: an actual falsehood, regardless of whether all parties agree to it.

Sounds like free mana then.

Someone eccentric could request this in their will, and their colleagues could tongue-in-cheek go along with it as a sort of memorial to their friend. But nobody would be considering it the living reincarnation of that person. I'd be disappointed if you resolved this YES in a situation like that, but it's perfectly possible.

This doesn't sound like something that would get listed in that author's bibliography, or get published in a major newspaper without qualifications. But I'm happy to state explicitly that I won't resolve it that way if it's clearly a joke, since that wouldn't be "general acceptance", by which I mean that the people immediately involved in the publication consider it an extension of that person's work. How they feel about it 'being' that person is out of scope; e.g., they could consider the bot to be a work by the person which produces new works, rather than literally them.

I would imagine that in any real situation where this happens, people would have a diversity of opinions about it, with only a minority convinced it's "really them" in any meaningful sense.

One plausible timeline for how this question could get a YES resolution:

  • We get Leopold's drop-in remote workers fairly early, say in the first half of 2025.

  • Intellectuals, especially older intellectuals with a strong established corpus, tune the AI workers on their work and use RL and DPO-like methods to bring them into closer alignment with their usual judgments and habits of thought (see the sketch after this list).

  • This eventually builds up into a kind of AI swarm or hivemind that takes over an increasing amount of that researcher's work process, creating a meaningful agentic artifact outside the person which works heavily in that person's style: essentially their AI apprentice.

  • By late 2026 the novelty has worn off, and enough people are doing this with high enough fidelity that citing the AI swarm as separate from the human isn't really a meaningful distinction, so people stop doing so.

  • One of the researchers doing this dies, and their posthumous digital apprentice is extensively consulted for a publication

  • To go with the existing norm, they just cite the swarm as though it were the person, under their normal name.

  • A Wikipedia editor adds this paper to the person's bibliography, reasoning that it's already normal for researchers to take credit for work automated from their cognitive traces and that this shouldn't really change after their death.
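
For concreteness, the tuning step in the second bullet might look something like the sketch below (assuming HuggingFace's trl library; the model name, preference data, and hyperparameters are hypothetical placeholders, not a claim about how anyone actually does this):

from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-70B"  # stand-in base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference pairs mined from the researcher's records:
# "chosen" is what they actually wrote, "rejected" is a generic model draft.
pairs = Dataset.from_dict({
    "prompt":   ["Referee report for: 'Scaling laws for ...'"],
    "chosen":   ["The headline claim only holds in the underparameterized regime ..."],
    "rejected": ["This paper is interesting and well written. Accept."],
})

trainer = DPOTrainer(
    model=model,  # ref_model defaults to a frozen copy of the policy
    args=DPOConfig(output_dir="simulacrum-dpo", beta=0.1),
    train_dataset=pairs,
    processing_class=tokenizer,  # trl >= 0.12; older versions take tokenizer=
)
trainer.train()

In practice a model this size would be sharded or LoRA-tuned rather than loaded naively, but the shape of the loop is the point.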

A little implausible perhaps, but this doesn't actually require that anyone involved accept on a metaphysical level that the swarm is the researcher in any meaningful sense, just that the swarm is credible or has the authority to be published as the person.

Another way in which a YES resolution could be reached is if something similar to the above happens and a minority of partisans begin to consider the gestalt system a kind of person or mind deserving of status and recognition. They publish in a circle where citing the swarm as the person is either normal or an act of sincere protest, and a journal editor goes along with it. This would count, even though the joke case wouldn't, because the publishers are sincere. If other institutions go along with the journal editor's decision (e.g., Wikipedia editors and ResearchGate), this question would resolve YES. Again, in this case general acceptance doesn't require that everyone involved believe it's the person, just that their behavior isn't substantially different from what it would be if they did. If they go along with it, it's a YES resolution.

Obviously, as you say, it's unlikely for the norms to just totally change overnight (though I distinctly remember that in pre-GPT media it was considered epistemically permissible to conclude that your computer was supernaturally possessed if a program displayed NLP capabilities similar to GPT-2, and that the Quarians in Mass Effect are supposed to have started a civil war with their robotic capital stock when one asked the question "Does this unit have a soul?"), so this question is mostly about the edges of the possible in its timeframe. I find asking these kinds of questions interesting because they're the kind of futurology that is thought-provoking and just on the edge of meaningfully predictable (I believe Tetlock found that forecasting accuracy really starts to drop off when the resolution is more than 3 years out). Most questions on Manifold are either relatively mundane or so far out as to be totally uncalibrated/highly likely to outlive the existence of Manifold itself. It's the questions in between those two extremes that I think are really worth asking.
