I experience cognitive dissonance, because my model of Eliezer is someone who is intelligent, rational, and aiming to use at least his public communications to increase the chance that AI goes well.
Consider that he is just as human and fallible as everyone else. “None of Eliezer’s public communication is -EV for AI safety” is such an incredibly high bar that it is almost certainly not true. We all say things poorly sometimes.
I suspect that some of my dissonance does result from an illusion of consistency and a failure to appreciate how multi-faceted people can really be. I naturally think of people as agents and not as a collection of different cognitive circuits. I’m not ready to assume that this explains all of the gap between my expectations and reality, but it’s probably part of it.