I disagree. It seems to me that this choice is, in general, pretty easy to make, and takes naught but common sense. Certainly that’s the case in the given example scenario. Of course there are exceptions, where the choice of reference class is trickier—but in general, no, it’s pretty easy.
(Whether the choice “requires abstractions and/or theory” is another matter. Perhaps it does, in a technical sense. But it doesn’t particularly require talking about abstractions and/or theory, and that matters.)
Sure, there is common sense, available to plenty of people, of which reference classes apply to Ponzi schemes (but, somehow, not to everybody, far from it). Yudkowsky’s point, however, is that the issue of future AIs is entirely analogous, so people who disagree with him on this are as dumb as those taken in by Bernies and Bankmans. Which just seems empirically false—I’m sure that the proportion of AI doom skeptics among ML experts is much higher than that of Ponzi believers among professional economists. So, if there is progress to be made here, it probably lies in grappling with whatever asymmetries exist between these situations. Telling skeptics for the hundredth time that they’re just dumb doesn’t look promising.
I mean, the Spokesperson is being dumb, the Scientist is being confused. Most AI researchers aren’t even being Scientists, they have different theoretical models than EY. But some of them don’t immediately discount the Spokesperson’s false-empiricism argument publicly, much like the Scientist tries not to. I think the latter pattern is what has annoyed EY and what he writes against here.
However, a large number of current AI experts do recently seem to be boldly claiming that LLMs will never be sufficient for even AGI, not to mention ASI. So maybe it’s also aimed at them a bit.
But some of them don’t immediately discount the Spokesperson’s false-empiricism argument publicly
Most likely as a part of the usual arguments-as-soldiers political dynamic.
I do think that there’s an actual argument to be made that we have much less empirical evidence regarding AIs compared to Ponzis, and plenty of people on both sides of this debate are far too overconfident in their grand theories, EY very much included.