I mostly believe this. I'm pretty lucky that I didn't get into AI safety for heroic save-the-world reasons, so this framing doesn't hurt my productivity. I currently work on research aimed at reducing s-risk at CLR.
Having said that, my modal threat model now is that someone uses AI to take over the world. I would love for more people to work on closely scrutinising leaders of labs and other figures in power, or, more generally, on trying to make the gains from transformative AI distributed by default.
> my modal threat model now is that someone uses AI to take over the world
Highly possible if one company really pulls ahead! Distributing AI is a good antidote to outright coups, but it has other problems.
Personally, I'm a fan of state-enforced democratic governance over the use of AI, or perhaps international democratic governance, and I'm not sure how this can be achieved with technical work. (For reasons of personal fit, I think technical work is what I should be doing.)