
I am a full-stack software engineer (web dev). I know frontend and backend equally well, and I know how to scale an app and how to manage servers/Linux.
My question is: how can I contribute the most to AI safety? Is it learning generative AI and then applying to AI research positions; getting a job at an AI research lab as a web dev; writing about and promoting AI safety...?
The more detailed the answer, reasoning, and plan, the better!
Rewards will be given by the amount of upvotes:
n1: 500
n2: 250
n3: 100
n4: 75
n5: 50
n6: 25
The first thing I would do is go through 80,000 Hours' guide on this: https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help
Then I would work through BlueDot's AI Safety Fundamentals course. You'll probably want this one: https://aisafetyfundamentals.com/alignment-fast-track/
My final piece of advice: keep in mind that if you seek to work at one of the big AI labs, you will probably end up advancing capabilities, even if the position has "safety" in its title. That can still be net positive, but it's very easy to let your salary bias your opinion of where the balance lies.