As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
> This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption.
To my mind, the key takeaway from the Wiener case study is that the juxtaposition of
(i) automation hasn’t dramatically increased unemployment, and
(ii) Wiener expressed concern that automation would dramatically increase unemployment
shouldn’t be taken as evidence that it’s not possible to make predictions about AI. My original justification for this takeaway was “Wiener was wrong, but his methodology was bad.” Your view seems to be “Wiener wasn’t wrong,” which, while different from what I said, is also a justification for the takeaway. So I don’t think that it matters much either way.
> As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
Thanks for the feedback.
> Your view seems to be “Wiener wasn’t wrong,” which, while different from what I said, is also a justification for the takeaway. So I don’t think that it matters much either way.
Ok, this makes your position more understandable. I guess I was thinking that Wiener’s case also has relevance for other issues that we care about, for example what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
> As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
A third-party perspective: I hadn’t noticed this while following the thread via RSS, so I went back and checked. I now think that if I were Wei Dai, the change I’d want to make in future threads would be to avoid language like “really wrong” and “a poor excuse” in favour of less loaded terms like “a big mistake” or “not a good reason”.