Yeah, I read Eliezer’s chapter “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Global Catastrophic Risks, and I was impressed by how far in advance he anticipated reactions to the rising popularity of AI safety: what it might be like when the public finally switched from skepticism to genuine concern, and what that shift might start to look like. Eliezer also anticipated that even safety-conscious work on AI might increase AI risk.
The idea that some existing institutions in AI safety, perhaps MIRI, should expand much faster than others, so they can keep up with and evaluate all the published material coming out, seems neglected.