[Question] What if we solve AI Safety but no one cares

Suppose that next year, AI Safety is solved and the solution is approved by Eliezer Yudkowsky, etc.

How do we actually get people to follow this solution?

It seems to me that many people and companies would ignore any AI Safety solution for the same reasons they currently ignore AI Safety:
- They think AGI is still very far away, so AI Safety methods don't need to be applied to the development of today's narrow AI systems
- The concepts of AI Safety are difficult to understand, so the solution gets applied incorrectly or not at all
- They would fall behind their competitors or make less money if they adhered to the AI Safety solution
- They have a cool new idea they want to test and simply don't care about, or don't believe in, the concerns raised by AI Safety

Thoughts?
