> Jonah’s impression is that Wiener had strong views on the subject, and doesn’t seem to have updated much in response to incoming evidence
My impression is Jonah may have gotten wrong impressions of Wiener’s views. I also didn’t see where Jonah talked about Wiener not having updated much in response to incoming evidence. (What evidence?) Did you see that in his post, or did he write about it elsewhere?
> I also didn’t see where Jonah talked about Wiener not having updated much in response to incoming evidence. (What evidence?) Did you see that in his post, or did he write about it elsewhere?
I wrote this in our full email exchange and didn’t provide justification. I no longer remember what I had in mind, and I may not have had good reasons for saying that.
My best guess is that I was thinking something along the lines of “he didn’t investigate sufficiently thoroughly to solicit and understand other people’s opinions on the subject,” but this is coming primarily from a general strong prior that people don’t solicit other perspectives and try to understand them, rather than anything specific to Wiener, and I recognize that there’s room for disagreement as to what prior is appropriate.
> but this is coming primarily from a general strong prior that people don’t solicit other perspectives and try to understand them, rather than anything specific to Wiener
It seems really wrong for you to state any conclusions based solely on your prior, since the whole point of this exercise is to gather evidence about how hard it is to plan for the future. Don’t you think that given the purpose of the project, people would naturally interpret all of your writings from the project as being about the evidence that you found, rather than about your personal priors?
> It seems really wrong for you to state any conclusions based solely on your prior
Morally wrong? ;)
> the whole point of this exercise is to gather evidence about how hard it is to plan for the future. Don’t you think that given the purpose of the project, people would naturally interpret all of your writings from the project as being about the evidence that you found, rather than about your personal priors?
I didn’t come across evidence that Wiener did update his beliefs.
I don’t necessarily stand by my remark about him not updating his beliefs. Note that the email exchange with Luke was very long. Taking enough care so as to make sure that every statement that I made was epistemically justified would have been prohibitively time consuming.
> I didn’t come across evidence that Wiener did update his beliefs.
Do you think he should have updated his beliefs, if so how? Given that he started writing about this stuff in 1947, and died in 1964, I’m not sure what kind of update he could have possibly (ideally) performed, that might justify the conclusion that he “doesn’t seem to have updated much in response to incoming evidence”.
Perhaps one update may be that unemployment isn’t as urgent a problem as he thought, assuming he did originally think it really urgent. But note that in the second writing I linked to, 13 years after his first, he no longer talked about unemployment. If he both thought the issue urgent and failed to update, don’t you think he would have repeated his warnings in an article dedicated to “the social consequences of [cybernetic techniques]”?
> Note that the email exchange with Luke was very long. Taking enough care so as to make sure that every statement that I made was epistemically justified would have been prohibitively time consuming.
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption. In any case, do you currently think it sufficiently justified to be included in Luke’s post?
As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
> This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption.
To my mind, the key takeaway from the Wiener case study is that the juxtaposition of
(i) Automation hasn’t dramatically increased unemployment
(ii) Wiener expressed concern that automation would dramatically increase unemployment.
shouldn’t be taken as evidence that it’s not possible to make predictions about AI. My original justification for this takeaway was “Wiener was wrong, but his methodology was bad.” Your view seems to be “Wiener wasn’t wrong,” but while different from what I said, this is also a justification for the takeaway. So I don’t think that it matters much either way.
> As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
Thanks for the feedback.
> Your view seems to be “Wiener wasn’t wrong,” but while different from what I said, this is also a justification for the takeaway. So I don’t think that it matters much either way.
Ok, this makes your position more understandable. I guess I was thinking that Wiener’s case also has relevance for other issues that we care about, for example what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
> As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
A third party perspective: I hadn’t noticed this while watching the thread in RSS, so I went back and checked. I now think that if I were Wei Dai, the change I’d want to make in future threads would be to avoid language like “really wrong” and “a poor excuse” in favour of less loaded terms like “a big mistake” or “not a good reason”.