
How can I contribute to AI safety?
No bounty left
I am a full-stack software engineer (web dev). I know frontend and backend equally well, and I know how to scale an app and manage servers/Linux.
My question is how I can contribute the most to AI safety. Is it learning generative AI and then applying to AI research positions; getting a job at an AI research lab as a web dev; or writing about and promoting AI safety...
The more detailed the answer, reasoning, and plan, the better!
Rewards will be given according to the number of upvotes:
n1: 500
n2: 250
n3: 100
n4: 75
n5: 50
n6: 25
Related questions
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons? (14% chance)
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026 (59% chance)
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (87% chance)
What AI safety incidents will occur in 2025?
Will AGI create a consensus among experts on how to safely increase AI capabilities? (37% chance)
Will someone commit violence in the name of AI safety by 2030? (65% chance)
Will there be serious AI safety drama at Meta AI before 2026? (45% chance)
Is RLHF good for AI safety? [resolves to poll] (48% chance)
Will prioritizing corrigible AI produce safe results? (45% chance)
Is slowing down AGI good for AI safety? [resolves to poll] (83% chance)