This has a very low n=16, and so presumably some strong selection biases. (Surely these are not the only people who have published thought-out opinions on the likelihood of fooming.) Without an analysis of the reasons these people give for their views, I don’t think this correlation is very interesting.
Thanks for the comment. There is some “multiple hypothesis testing” effect at play in the sense that I constructed the graph because of a hunch that I’d see a correlation of this type, based on a few salient examples that I knew about. I wouldn’t have made a graph of some other comparison where I didn’t expect much insight.
However, when it came to adding people, I did so purely based on whether I could clearly identify their views on the hard/soft question and years worked in industry. I’m happy to add anyone else to the graph if I can figure out the requisite data points. For instance, I wanted to add Vinge but couldn’t clearly tell what x-axis value to use for him. For Kurzweil, I didn’t really know what y-axis value to use.
Agree re the low N; a wider survey would be much more informative.
But if the proposition was “belief in a cortex-wide neural code” and the x axis was “years working in neuroscience”, would the correlation still be uninteresting?
Obviously I’m suggesting that some of the dismissal of the correlation might be due to bias (in the LW community generally, not you specifically) in favor of a hard foom. To me, if belief in a proposition in field X varies inversely with experience in field X… well, all else equal, that’s grounds to give the proposition a bit more scrutiny.
But if the proposition was “belief in a cortex-wide neural code” and the x axis was “years working in neuroscience”, would the correlation still be uninteresting?
If there are thousands of people working in neuroscience and you present a poll of 16 of them which shows correlation between the results and how long they’ve been working in the field, and you leave out how you selected them and why they each think what they do and how they might respond to one another, then I wouldn’t assign much credence to the results.
Obviously I’m suggesting that some of the dismissal of the correlation might be due to bias
Or the correlation might be due to the pollster’s bias in choosing respondents. Or (most likely) it might be accidental, the product of an underpowered analysis.
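(A rough illustration of the underpowered-analysis point, not from the original thread: with only 16 data points, pure noise produces sizable correlations surprisingly often. The simulation below, a sketch using independent random data as a stand-in for the survey, estimates how frequently |r| ≥ 0.5 shows up by chance alone.)

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
trials = 20000
n = 16          # same sample size as the graph under discussion
big = 0
for _ in range(trials):
    # Two variables with NO true relationship
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    if abs(pearson_r(xs, ys)) >= 0.5:
        big += 1

print(big / trials)  # roughly 0.05: a "strong-looking" correlation from noise
```

So even before selection effects, about 1 in 20 arbitrary n=16 comparisons will show |r| ≥ 0.5 from nothing at all, which is why eyeballing one such scatterplot carries little evidential weight.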
To me, if belief in a proposition in field X varies inversely with experience in field X… well all else equal that’s grounds to give the proposition a bit more scrutiny.
To be clear, I’m saying that this study is far too underpowered, underspecified, and under-explained to cause me to believe that “belief in a proposition in field X varies inversely with experience in field X”. If I believed that, I would come to the same conclusion as you do.
Fair. The closest thing I’ve seen to that is this: http://www.sophia.de/pdf/2014_PT-AI_polls.pdf (just looking at the Top100 category and ignoring the others). And as I was writing this I remembered that I shouldn’t be putting much credence in expert opinion in this field anyway (https://intelligence.org/files/PredictingAI.pdf), so yes, you’re right that this correlation doesn’t say much.