I didn’t come across evidence that Wiener did update his beliefs.
Do you think he should have updated his beliefs, and if so, how? Given that he started writing about this stuff in 1947, and died in 1964, I’m not sure what kind of update he could have possibly (ideally) performed that might justify the conclusion that he “doesn’t seem to have updated much in response to incoming evidence”.
Perhaps one update would be that unemployment isn’t as urgent a problem as he thought, assuming he did originally think it really urgent. But note that in the second piece I linked to, written 13 years after his first, he no longer talked about unemployment. If he both thought the issue urgent and failed to update, don’t you think he would have repeated his warnings in an article dedicated to “the social consequences of [cybernetic techniques]”?
Note that the email exchange with Luke was very long. Taking enough care to ensure that every statement I made was epistemically justified would have been prohibitively time-consuming.
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption. In any case, do you currently think it sufficiently justified to be included in Luke’s post?
As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption.
To my mind, the key takeaway from the Wiener case study is that the juxtaposition of
(i) Automation hasn’t dramatically increased unemployment
(ii) Wiener expressed concern that automation would dramatically increase unemployment.
shouldn’t be taken as evidence that it’s not possible to make predictions about AI. My original justification for this takeaway was “Wiener was wrong, but his methodology was bad.” Your view seems to be “Wiener wasn’t wrong,” but while different from what I said, this is also a justification for the takeaway. So I don’t think that it matters much either way.
As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
Thanks for the feedback.
Your view seems to be “Wiener wasn’t wrong,” but while different from what I said, this is also a justification for the takeaway. So I don’t think that it matters much either way.
Ok, this makes your position more understandable. I guess I was thinking that Wiener’s case also has relevance for other issues that we care about, for example what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
A third-party perspective: I hadn’t noticed this while watching the thread in RSS, so I went back and checked. I now think that if I were Wei Dai, the change I’d want to make in future threads would be to avoid language like “really wrong” and “a poor excuse” in favour of less loaded terms like “a big mistake” or “not a good reason”.