I genuinely don’t understand why a group which is highly truth-seeking and dispassionately interested in the validity of their very consequential arguments feels so little reason to engage with counter-arguments to their core claims which have been well-received.
A bunch of the more pessimistic people have in practice spent a decent amount of time trying to argue with (e.g.) Paul Christiano and other people who are more optimistic. So, it’s not as though the total time spent engaging with counter-arguments is small.
Additionally, I think there are basically two different questions here:

1. Should people who are very pessimistic be interested in spending a bunch of time engaging with counterarguments? This could be either to argue to bystanders or to get a better model of reality for themselves and/or the counterparty.
2. Who should people who are very pessimistic aim to engage with, and what counterarguments should they discuss?
I’m pretty sympathetic to the take that pessimistic people should spend more time engaging (question 1), but I'm not that sure that immediately engaging with "AI optimists" specifically is the best approach (question 2).
(FWIW, I think both AI optimists and Yudkowsky and Nate often make important errors with respect to various arguments, at least when these arguments are made publicly in writing.)
ETA: there is relevant context in this post from Nate: "Hashing out long-standing disagreements seems low-value to me".