To play devil’s advocate, what if it’s irrational given the current available information to think that self-improving AI can be developed in the near future, or is more important to work on than other existential risks?