I don’t think this was a good debate, but I felt I was in a position where I would have had to invest a lot of time to do better by the other side’s standards.
Quintin and I have agreed to do an X Space debate, and I’m optimistic that format can be more productive. While I don’t necessarily expect to update my view much, I am interested to at least understand what the crux is, which I’m not super clear on atm.
Here’s a meta-level opinion:
I don’t think it was the best choice of Quintin to keep writing replies that were disproportionately long compared to mine.
There’s such a thing as zooming claims and arguments out. When I write short tweets, that’s what I’m doing. If he wants to zoom in on something, I think it would be a better conversation if he made an effort to do it less at a time, or do it for fewer parts at a time, for a more productive back & forth.
I don’t think it was the best choice of Quintin to keep writing replies that were disproportionately long compared to mine.
I understand why you feel this way, but I do think that it was sort of necessary to respond like this, primarily because I see a worrisome asymmetry between the arguments for AI doom and AI being safe by default.
AI doom arguments are more intuitive than AI safety by default arguments, making AI doom arguments requires less technical knowledge than AI safety by default arguments, and critically, the AI doom arguments are basically entirely wrong, while the AI safety by default arguments are mostly correct.
Thus, Quintin Pope has to respond at length, since refuting bullshit or wrong theories takes far longer than making intuitive but wrong arguments for AI doom.
Quintin and I have agreed to do an X Space debate, and I’m optimistic that format can be more productive.
Alright, that might work. I’m interested to see whether you will write up a transcript, or whether I will be able to join the X Space debate.
“AI doom arguments are more intuitive than AI safety by default arguments, making AI doom arguments requires less technical knowledge than AI safety by default arguments, and critically, the AI doom arguments are basically entirely wrong, while the AI safety by default arguments are mostly correct.”
I really don’t like that you make repeated assertions like this. Simply claiming that your side is right doesn’t add anything to the discussion and easily becomes obnoxious.
I really don’t like that you make repeated assertions like this. Simply claiming that your side is right doesn’t add anything to the discussion and easily becomes obnoxious.
Yes, I was trying to be short rather than write the long comment or post justifying this claim, because I had to write at least two long comments on this issue.
But thank you for pointing this out. I definitely agree that I was wrong to just claim that I was right without trying to show why, especially without explaining things.
Now I’m thinking that text-based interaction is actually bad, since it limits how much information we can communicate.
Appreciate the detailed analysis.