I viewed this as nitpicking a claim that’s not super central. It felt indicative of a general pattern amongst rationalists of overconfident/overstated claims about civilizational inadequacy. I think often there are real problems in these cases, but they are kind of messy and typically not as big a deal as claimed.
I think it only has relevance to my views about AI alignment insofar as civilizational inadequacy is also relevant there. I don’t think the detailed claims about AC have much relevance to the general story about civilizational inadequacy but I agree they have some. I don’t think the prediction in this post has that much relevance to whether the OP was overstated but I agree it has some.
In my original comment I made a prediction for the OP that amounted to predicting a 33-43% difference for typical use. John is predicting a >50% difference under particular conditions that seem likely to be pretty similar to typical use. The reader can decide how significant that gap is.
Makes sense.
I do think there’s something sort of like a silent evidence problem for civilizational inadequacy. Something resembling green rationalists. There’s a natural tendency for claims of inadequacy to offend someone, because there’s a claim that someone is doing something wrong. As a result, there’s a natural tendency for evidence and arguments for inadequacy to soften as they get passed along the social web. A tendency to preferentially fill in excuses rather than condemnations.
Self-consciousness wants to make everything about itself. It’s like the parable of the gullible king.
So I have a tendency to pay more attention to the pro-inadequacy pieces of evidence that make it to me, because I think they’re probably more like what the real world looks like under the hood, and somewhat ignore arguments to the effect that they’re not as big a failure as they first appear.
But such reasoning should be employed cautiously.