As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
Thanks for the feedback.
Your view seems to be "Wiener wasn't wrong," which, while different from what I said, is also a justification for the takeaway. So I don't think it matters much either way.
Ok, this makes your position more understandable. I guess I was thinking that Wiener's case also has relevance for other issues we care about, for example, what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
A third party perspective: I hadn’t noticed this while watching the thread in RSS, so I went back and checked. I now think that if I were Wei Dai, the change I’d want to make in future threads would be to avoid language like “really wrong” and “a poor excuse” in favour of less loaded terms like “a big mistake” or “not a good reason”.