When will brain upload be demonstrated for an animal?
Before 2026: 1.9%
Before 2030: 5%
Before 2034: 15%
Before 2038: 30%
Before 2042: 41%
Before 2046: 50%
Before 2050: 55%

For this question to resolve positively, the following has to be done:

  1. An individual animal has to be conditioned (trained) in some way so that its behavior is in some testable way different from that of its peers.

  2. Its brain is scanned (possibly destructively).

  3. The scanned brain is simulated on a computer.

  4. The simulated brain exhibits the behavior that the original animal learned in step 1 (a toy sketch of such a check follows below).

The animal can be arbitrarily primitive; the only restriction is that it can be trained.
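To make step 4 concrete, here is a minimal, purely hypothetical sketch of how the behavioral check could be scored: compare the conditioned-response rate of the simulated brain against untrained peers. All names and numbers are made up for illustration; any real resolution would use whatever protocol the published paper defines.

```python
# Toy sketch of the behavioral check in step 4 (all numbers hypothetical).
# "Conditioned response rate": fraction of trials in which the subject
# shows the trained behavior when presented with the conditioning stimulus.
import math

def two_proportion_z(successes_a, trials_a, successes_b, trials_b):
    """Two-proportion z-test: is rate A significantly above rate B?"""
    p_a = successes_a / trials_a
    p_b = successes_b / trials_b
    p_pool = (successes_a + successes_b) / (trials_a + trials_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / trials_a + 1 / trials_b))
    return (p_a - p_b) / se

# Hypothetical data: the simulated brain vs. untrained peer animals.
sim_hits, sim_trials = 42, 50      # simulated brain shows the behavior
peer_hits, peer_trials = 9, 50     # untrained peers rarely do

z = two_proportion_z(sim_hits, sim_trials, peer_hits, peer_trials)
print(f"z = {z:.2f}")              # z > ~2.6 corresponds to p < 0.005, one-sided
```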

The result has to be published in a peer-reviewed journal. If, by the time this happens, academic culture has changed enough to accept some other type of validation, that will also be acceptable.

I will not bet on this question.


I'm expecting ASI to be developed before C. elegans is emulated, simply because of research incentives.

It feels to me (as someone without a clue about the mechanics) like this would be like trying to clone the woolly mammoth if elephants were also extinct. Or emulating an old game when no hardware still exists to base the emulator on. Except it's an exponentially harder problem that sits in the uniqueness of every individual complex (i.e. mammal's) brain: the neurons are different enough in every individual that the simulator would need to grow alongside the subject during every moment of its life/development. The thought is hazy enough that it could be completely ignorant, but I'm just trying to describe a feeling that brain upload is a concept like fiction, with some fundamental problem in the idea rooted in the diversity of brains themselves. The simulator itself would be the problem, beyond the technical challenge of compositing slivers of nanometre-thick slices of brain.

@VAPOR So you think that it's a difficult problem. I think everyone agrees.

I described a specific empirical setup that has to be fulfilled, no need to solve the problem for all possible brains.

And I would be cautious calling something "fiction". If you had described ChatGPT to somebody a few years ago, they would have told you it's fiction that wouldn't be realized in this century.

@OlegEterevsky I've seen these Singularity-type concepts discussed over the years, and AI that could pass the Turing test or be smarter than humans was never a fiction-like goal, just a matter of time. But brain uploading has always seemed like fiction to me. Like most cryonics and immortality definitions, except harder. Star Trek stuff.

@VAPOR Here's a Metaculus question from 2020. The median prediction for "weak AGI" was initially >2050 and at some point went past 2100.

Here Scott Aaronson of all people in 2009 says that it will take "thousands of years" to develop general AI: https://twitter.com/RokoMijic/status/1734357777227645434

Ten years ago general-ish AI was no less "Star Trek stuff" than brain scanning is now.

@VAPOR Also, I'd like to point out that OpenWorm would resolve this market to YES if they manage to train a live worm (which is not totally impossible), and then scan its 302 neurons and however many synapses.
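For a sense of scale, a worm-sized "scan then simulate" pipeline is computationally tiny. Below is a toy sketch, assuming a hypothetical scanned connectome expressed as a 302×302 weight matrix and deliberately simplistic leaky integrate-and-fire dynamics; OpenWorm's actual models (e.g. c302) are far more detailed.

```python
# Toy sketch: "scan then simulate" for a worm-scale nervous system.
# The weight matrix stands in for a scanned connectome; the dynamics are
# a crude leaky integrate-and-fire model, not OpenWorm's real one.
import numpy as np

rng = np.random.default_rng(0)
N = 302                                   # C. elegans neuron count
# Hypothetical "scanned" synapses: ~5% connectivity, random weights.
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.05)

v = np.zeros(N)                           # membrane potentials (arbitrary units)
threshold, leak = 1.0, 0.9

def step(v, external_input):
    spikes = v > threshold                # which neurons fire this tick
    v = leak * v + W @ spikes + external_input
    v[spikes] = 0.0                       # reset fired neurons
    return v, spikes

for t in range(100):
    stimulus = np.zeros(N)
    stimulus[:10] = 0.5                   # drive a few "sensory" neurons
    v, spikes = step(v, stimulus)
print("neurons spiking at t=100:", int(spikes.sum()))
```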

@OlegEterevsky Big difference between simulating C. elegans and simulating a human.

Also, AI now was a case of discovering that transformers make connectionism deliver far beyond the earlier assumption that AGI would need symbolic AI. The engineering of simulating an actual animal's brain isn't a matter of discovering one mathematical solution (?), it's much more.

@VAPOR This question has "animal" in the title.

It is possible that we need a CRISPR-like discovery that would enable us to read the state of synaptic connections.

@OlegEterevsky I think I used the word mammal earlier. I'm not sure where worms fit. I guess worms do prove the concept, but that would only scale so far up the tree of life, like how deep learning alone may never get to the original conception of AGI.

Interesting chat by the way

When will people learn to differentiate between sci-fi and reality? Uploading brains makes no sense because your brain doesn't sit in a jar. The vast majority of our brain is used for unconscious and subconscious bodily functions. Even the small part where conscious thought occurs is heavily influenced by external stimuli, as well as by internal state such as hormones and various signals from the nervous system. And that's not even getting to the potentially quantum nature of the brain.

I guess you could simulate an entire organism, but you would need to simulate it down to the level of molecular biochemistry.

Long term markets don't work and many Manifold users are delusional about the long-term future.

@Shump Simulating hormones and other external stimuli should be much easier than simulating the brain state itself, since they have much lower dimension. For a hormone it's just its concentration in the bloodstream in a particular part of the brain. External senses also have very small bandwidth compared to billions of neurons and trillions of connections.

I haven't seen any evidence that you'd need to simulate the whole body at a molecular level to produce brain functions. It is theoretically possible that neurons themselves have complex internal structure that has to be simulated, but they are talking to each other via effectively one-bit signals, so it seems unlikely.

I'm super skeptical regarding quantum brains. For one, we don't know of any advantage that quantum computing could bring to what the brain does. Also, we do not know of any way to preserve a coherent quantum state at room temperature.
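A minimal sketch of the dimensionality asymmetry described above, with made-up sizes: the neural state is at least one number per neuron, while a hormone enters the model as a single region-wide scalar knob.

```python
# Toy illustration: hormones as low-dimensional inputs (all numbers made up).
import numpy as np

N_NEURONS = 1_000_000                          # stand-in for a small animal brain
neural_state = np.zeros(N_NEURONS)             # high-dimensional: per-neuron state
hormones = {"dopamine": 0.1, "cortisol": 0.3}  # low-dimensional: a few scalars

def modulated_gain(base_gain, hormones):
    # A hormone level is one scalar shared by a whole region: it scales
    # every neuron's excitability rather than carrying per-neuron data.
    return base_gain * (1.0 + 0.5 * hormones["dopamine"]
                            - 0.2 * hormones["cortisol"])

gain = modulated_gain(1.0, hormones)
neural_state += gain * 0.01                    # the same knob for all neurons
print(f"{N_NEURONS} state variables, {len(hormones)} hormone inputs, gain={gain:.2f}")
```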

@OlegEterevsky "[neurons] are taking to each other via effectively one-bit signals" is probably just the delusional oversimplification @Shump is alluding to. We know that the timing of the spikes carries information, so any model of the neuron has to at least allow for spike trains and refractory periods. Such model thus has to have at least the complexity of the Hodgkin–Huxley model: https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model presumably noticeably more: at some point a single molecule binding to its receptor or not by pure biochemical chance can make a macroscopic difference and it is not clear that this would be just noise irrelevant to cognition (because we know that at molecular level cells can rely on the randomness of biochemical reactions to function properly, so why neurons wouldn't?)

@MartinModrak The "I haven't seen any evidence why you'd neer to simulate the whole body on a molecular level" is the delusion I'm talking about. People think that because they don't understand why something can't be possible, therefore it must be possible. It's the same logical fallacy the AI risk folk use.

@MartinModrak I agree that the timing of the signals between neurons is important, and I kinda elided it in my previous comment. However, in itself that doesn't mean the neuron has to be very complex. As you can notice, the Hodgkin–Huxley model that you are referring to contains only a handful of parameters, which is a far cry from a molecular-level simulation.
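For reference, here is a sketch of what "a handful of parameters" means in practice: the classic Hodgkin–Huxley squid-axon model, integrated with forward Euler. The conductances, reversal potentials, and rate functions below are the standard textbook values; the injected current and duration are arbitrary choices for the demo.

```python
# Sketch of the Hodgkin–Huxley model: standard squid-axon parameters,
# forward-Euler integration. A handful of parameters per neuron, far
# coarser than a molecular-level simulation.
import math

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0

# Voltage-dependent gating rates (1/ms), the classic HH fits
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
dt, I_ext = 0.01, 10.0                   # step (ms), injected current (uA/cm^2)

spikes = 0
for _ in range(int(50 / dt)):            # 50 ms of simulated time
    I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
    I_K = g_K * n**4 * (V - E_K)         # potassium current
    I_L = g_L * (V - E_L)                # leak current
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    prev_V, V = V, V + dt * dV
    if prev_V < 0.0 <= V:                # upward zero-crossing = one spike
        spikes += 1

print(f"spikes in 50 ms at I = {I_ext} uA/cm^2: {spikes}")
```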

Regarding randomness, I agree that it might play a role, but I don't see why this is significant. I mean, we can generate (pseudo-)random numbers.

@Shump I don't think that you need a fine-grained body model to simulate a brain for three reasons:

  1. Some cells are randomly dying and dividing in your body all the time, and it doesn't significantly affect your mind. And even if it does to some extent, you don't need to simulate specifically which cell has died to produce its effect on the mind.

  2. The bandwidth of the connections between the brain and the rest of the body is pretty limited. It boils down to a limited number of nerves and the effects of hormones and other chemicals in the blood. This is not enough to pass to the brain the information about everything that happens in the body (see the rough numbers sketched after this list).

  3. In the past several years there has been a lot of research meaningfully using neuronal cell cultures, without any body.
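A rough back-of-envelope for point 2, with every count an order-of-magnitude assumption rather than a measurement:

```python
# Back-of-envelope for the body-to-brain bandwidth argument.
# All counts are order-of-magnitude assumptions, not measurements.
axons_to_brain = 5e6        # assume a few million afferent fibers in total
                            # (the two optic nerves alone are ~2e6)
spikes_per_sec = 100        # generous per-fiber firing rate
bits_per_spike = 1          # treat each spike as roughly one bit

body_to_brain_bps = axons_to_brain * spikes_per_sec * bits_per_spike

neurons = 8.6e10            # human brain, commonly cited figure
synapses = 1e14             # commonly cited order of magnitude

print(f"body -> brain: ~{body_to_brain_bps:.0e} bits/s")
print(f"internal state: ~{neurons:.0e} neurons, ~{synapses:.0e} synapses")
# ~5e8 bits/s of input vs ~1e14 synapses of internal state: the interface
# is many orders of magnitude narrower than the brain itself.
```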

Regarding the (im-)possibility, yeah, if you don't have any reasons to believe that something is impossible, then you have to assume that it might be possible. That's basic logic.
