I don’t see why humanity can make rapid progress in fields like ML while being unable to make progress on AI alignment.
The reason normally given is that AI capability is much easier to test and optimise than AI safety. Much like philosophy, with alignment it’s very unclear when you are making progress, and sometimes unclear whether progress is even possible. It doesn’t help that AI alignment isn’t particularly profitable in the short term.