while also explicitly naming “rationalists” in your list of groups that are trying to destroy religion
Quite honestly, if he’s not mentioning things like that, then the rest of what he says comes across as lying by omission to people who are sick and tired of being lied to by omission.
I would expect that the net result of this talk is to make anyone sympathetic to it discount the opinions of many of the people who’ve put the most work into understanding e.g. technical AI safety or AI governance.
My prior is that most of what is called “AI safety” work is Timnit compliance, not notkilleveryoneist work, but I’m open to updating.
It’s of course true that rationalists tend to be hostile to religion, and it would be untruthful to deny this.
But you only have a limited amount of things you can say in a talk. If you bring up something, it’s because you want to explicitly emphasize and draw attention to it. Everything else gets omitted automatically, and there’s nothing dishonest about that. Most of the audience isn’t even going to know who rationalists are. I don’t think that anyone would have reacted to the speech negatively if that part had been left out—they’d just have focused on the things that Geoffrey did say that they agreed with.
And if one did want to bring up the religion angle, one could have done it in a way that emphasized the agreement. “Even many liberals are now afraid of humanity creating a faux god that they cannot control. While they do not think of it in those terms, even they can see that the mania of the most reckless AI developers is setting up a new Tower of Babel that may shatter not only language but the world itself.”
Yes. And way too much ‘AI safety work’ boils down to ‘getting paid huge amounts by AI companies to do safety-washing & public relations, to kinda sorta help save humanity, but without upsetting my Bay Area roommates & friends & lovers who work on AI capabilities development’.