But I think there are hardcore engineers that would be useful to convince.
Sure, because it would be nice if there were 0 instead of 2 prominent ML experts who were unconvinced. But 2 people is not a consensus, and the actual difference of opinion between Ng, LeCun, and everyone else is very small, mostly dealing with emphasis instead of content.
From a survey linked from that article (which cherry-picks a single number from it… sigh), it looks like there is a disconnect between theorists and practitioners, with theorists more likely to believe in a hard takeoff: theorists put roughly a 15% chance on superintelligence arriving within 2 years of human-level intelligence, while practitioners put it at about 5%.
I think you would find nuclear physicists assigning a higher probability to chain reactions pretty quickly once a realistic pathway that released 2 neutrons was shown.
MIRI/FHI has captured the market for worrying about AI. If they are worrying about the wrong things, that could be pretty bad.