If we fail to decode the brain's learning algorithm, this question will resolve ambiguously.
This question will resolve positively if a system trained using the brain's learning algorithm either surpasses the SOTA achieved using backpropagation, or matches the SOTA while using less than half the FLOPs, on a major benchmark before 2033. If no such system does so, this question resolves negatively.
Examples of major benchmarks are:
1) Playing Go
2) ImageNet
3) BIG-bench
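As a sketch, the resolution rule above could be written as a simple check. All names and numbers here are illustrative assumptions, not part of the question itself:

```python
# Hypothetical sketch of the resolution criteria; scores and FLOP
# counts are placeholders, not real benchmark data.

def resolves_yes(brain_score, brain_flops, sota_score, sota_flops):
    """Positive resolution if the brain-derived system beats the SOTA
    outright, or matches it while using less than half the FLOPs."""
    beats_sota = brain_score > sota_score
    matches_cheaply = (brain_score >= sota_score
                       and brain_flops < 0.5 * sota_flops)
    return beats_sota or matches_cheaply

# Example: matching SOTA accuracy at 40% of the compute resolves YES.
print(resolves_yes(0.90, 4e21, 0.90, 1e22))  # True
```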
@vluzko That's a good point. I was brushing everything under "backprop", which is incorrect. I want a market on Hinton's conjecture that current NNs have better learning algorithms than the brain. I'm not sure how to cash this out as a question. I might change it to: "if you train an n-parameter neural net vs. an n-parameter simulation isomorphic to the brain, which one does better on standard ML benchmarks?" But maybe the difference only shows up in how the two systems scale with parameter count.
I guess another operationalization: if we decode the brain and base a new learning system on the way (part of) the brain learns, will that system outperform e.g. a SOTA transformer model, or some other ANN? That's still too vague, though.
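One way to make the "scaling with parameter count" comparison concrete is to fit an assumed power law to benchmark loss at several model sizes and compare the exponents. This is only an illustrative sketch; the data points and the power-law form are assumptions, not real measurements:

```python
# Illustrative sketch: compare how two systems' benchmark losses
# scale with parameter count N, assuming loss ~ N^(-alpha).
import math

def fit_loglog_slope(params, losses):
    """Least-squares slope of log(loss) vs log(params), i.e. the
    (negative) exponent of an assumed power law."""
    xs = [math.log(n) for n in params]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical losses for each system at three parameter counts.
backprop_slope = fit_loglog_slope([1e6, 1e7, 1e8], [0.50, 0.35, 0.25])
brainlike_slope = fit_loglog_slope([1e6, 1e7, 1e8], [0.60, 0.38, 0.24])

# A more negative slope means faster improvement with scale.
print(brainlike_slope < backprop_slope)  # True for this toy data
```

Under this framing, "which learning algorithm is better" becomes a question about which curve has the steeper exponent, rather than a single-size comparison.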
Any recommendations would be appreciated.