Neurotechnology is most relevant for AI safety with respect to...
Never closes
BCIs extracting human knowledge
BCIs creating a reward signal
Neurotech enhancing humans
Understanding human value formation
Cyborgism
Whole brain emulation
Hat tip to Sumner Norman and Lisa Thiergart. More info on the neurotech definitions and their potential AI safety use cases: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3/distillation-of-neurotech-and-alignment-workshop-january-1
Disclaimers:
This question is part of Foresight’s 2023 Vision Weekends to help spark discussion amongst participants, so the phrasing and resolution criteria may be vaguer than I would normally like for this site. Apologies for that. We thought it would still be useful to make the market public to potentially inform other discussions.
If you would like to add alternative answers, please do so in the comments!
This question is managed and resolved by Manifold.
Related questions
Will there be serious AI safety drama at Google or Deepmind before 2026? (54% chance)
What AI safety incidents will occur in 2025?
Will neurotechnology enable AI to predict and classify human decisions, along with their influencing factors, by 2030? (70% chance)
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (88% chance)
10. The first real AI safety incident will occur. (52% chance)
Will there be serious AI safety drama at Meta AI before 2026? (45% chance)
Will something AI-related be an actual infohazard? (76% chance)
Will Bing Chat be the breakthrough for AI safety research? (4% chance)
Will prioritizing corrigible AI produce safe results? (45% chance)
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030? (30% chance)