So, I guess the question boils down to, how seriously should I consider switching into the field of AI Alignment, and if not, what else should I do instead?
I think you should at least take the question seriously. You should consider becoming involved in AI Alignment to the extent that you think doing so will be the highest-value strategy, accounting for opportunity costs. An estimate for this could be derived from the interplay between your answers to the following basic considerations:
What are your goals?
What are the most promising methods for pursuing your various goals?
What resources do you have, and how effective would investing those resources be, on a method by method and goal by goal basis?
An example set of (short and incomplete) answers which would lead you to conclude “I should switch to the field of AI Alignment” is:
In addition to “normal” everyday goals, I want extreme things. I care about the broad-scope future of Humanity to a degree even roughly corresponding to what basic math suggests.
The best way to achieve my extreme goals is to break the current limitations on intelligence (brains need to fit in very small boxes, synapses fire in slow motion, you need to train each individual human, training time for individual humans is bad, I/O speed is complete trash, etc.) and to safely exploit this.
I have the personal aptitude to work on this problem. It would be better for me to devote my personal time to this project than to pay other people to work on it for me, for example. My normal everyday goals can also be effectively pursued by moving to the field.
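To make that interplay concrete, here is a minimal back-of-envelope sketch of how answers like the ones above could be turned into a rough comparison between strategies. Every number in it is a hypothetical placeholder, not an estimate I endorse; the point is only the shape of the calculation.

```python
# Toy expected-value comparison between two strategies.
# Every figure below is a made-up placeholder; substitute your own estimates.

strategies = {
    "switch_to_alignment": {
        "p_meaningful_contribution": 0.10,  # chance your direct work matters at all
        "value_if_it_matters": 1_000_000,   # in arbitrary "impact units"
        "annual_donation": 0,               # you are working in the field instead
    },
    "stay_and_donate": {
        "p_meaningful_contribution": 0.01,  # indirect influence only
        "value_if_it_matters": 1_000_000,
        "annual_donation": 20_000,          # dollars per year you could give instead
    },
}

IMPACT_PER_DOLLAR = 0.5  # guessed exchange rate from dollars to "impact units"

def expected_value(s):
    direct = s["p_meaningful_contribution"] * s["value_if_it_matters"]
    indirect = s["annual_donation"] * IMPACT_PER_DOLLAR
    return direct + indirect

for name, s in strategies.items():
    print(f"{name}: {expected_value(s):,.0f} impact units")
```

The hard part is not the arithmetic but putting defensible numbers on things like “probability my work matters at all,” which is exactly what the three considerations above are probing.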
Like should I avoid working on AI at all and just do something fun like game design, or is it still a good idea to push forward ML despite the risks?
If you’re not doing bleeding-edge research (and no one doing bleeding-edge research is reading your papers), your personal negative impact on AI Alignment efforts can be offset more effectively by earning more money and donating it, e.g. $500 to MIRI (or a related organization), than by changing careers.
And if switching to AI Alignment should be done, can it be a career or will I need to find something else to pay the bills with as well?
AI Alignment is considered by many to be literally the most important problem in the world. If you can significantly contribute to AI Alignment, you will be able to find someone to give you money.
If you can’t significantly personally contribute to AI Alignment but still think the problem is important, I would advise advancing some other career and donating money to alignment efforts, starting a YouTube channel to spread awareness of the problem, etc.
I am neither familiar with you nor an alignment researcher, so I will eschew giving specific career advice.