There seem to me to be different categories of doomfulness.
There are people who think that, for theoretical reasons, AI alignment is hard or impossible.
There are also people who are more focused on practical issues, like AI companies being run in a profit-maximizing way and having no incentive to care about most of the population.
Saying “You can’t AI box for theoretical reasons” is different from saying “Nobody will AI box for economic reasons.”