A bunch of points that are kind of the same point:
Not suffering fools gladly is a common way to punish people for wasting your time or strategically misunderstanding you. Expecting to lose status for doing this to someone is an incentive, if you're going to engage at all, to actually engage with what they said rather than misread them. (It's also an incentive not to engage with them.)
Similarly, it's a common dynamic that powerful people will pretend to like people who are interesting but don't cost them political points to endorse, while mostly ignoring them; once those people actually start costing them politically, the pretense goes away. I do think this pretense is bad for actually engaging with ideas and positions. (E.g. I think many religious people might have called Richard Dawkins a polite debate partner if he had stuck to just stating his position, and hadn't used more fiery rhetoric that was building a movement, even though his position didn't change.)
I think Eliezer has said he took a more polite and hopeful tone in the past (e.g. this and this and this versus this and this and this), and felt that people basically didn't engage with him and did whatever they were going to do anyway. So he thought he should try something different, and it has indeed been more successful at causing people to engage with the perspective that AI is, by default, going to be an extinction-level bad event.
I think that broadly, Eliezer is more hostile due to feeling like he’s in a hostile epistemic environment. Personally I take this as a cue to more seriously consider that the hostility is appropriate to the world we find ourselves in (which also is a world where people have seemingly had their head in the sand for over a decade and very powerful people are racing to build doomsday machines while largely not openly engaging with this fact nor the arguments that this is an atrocity).
Some other factors that are relevant:
I think it's accurate to model Eliezer as having a chronic illness involving fatigue, which means he just doesn't have much time/energy to put into lots of public dialogues. This is a factor that makes him less invested in back-and-forths and in understanding others.
Eliezer's internal thoughts are also in a pretty different language from other people's, which can lead to miscommunications. In the example you give, with the potted plant, the questioner kind of indicated that they hadn't read the post carefully: the post explicitly says he's asking about pre-ChatGPT statements, yet the person asks whether a statement made today would count. If you assume that they did read the post, then his reply makes more sense (it wasn't intended primarily as an insult so much as an attempt to give a short answer to a somewhat pointless and irrelevant question).
To be clear, I think he could do a better job of understanding the people he's corresponding with in text, and I am still confused about why he seems (to me) below average at this.