It is technically correct that human labor won’t become worthless. That is the worst type of correct.
Human labor will become so close to worthless that it won’t buy even the cheapest food and housing. Yes, technically AI and robotics have limitations. But those limitations are so far above human limitations that resting your argument on them without addressing that gap seems to be actively harming the discourse. I think that’s why this post was downvoted so heavily; it’s not only wrong, it seems like it’s arguing to persuade instead of to inform, something we’re asked not to do here on LessWrong.
This post was actively irritating to read until I remembered seeing a nearly identical argument from some established economists. Their error was the same, and it is understandable.
It is a failure to take the premise seriously. It reads as an outright refusal to take the premise seriously. But it isn’t. It is motivated reasoning, the most important cognitive bias.
To be fair, I and others who have made internal and external claims about dramatic change are biased in the other direction.
May this be a reminder to us all to be mindful of our own biases.
Or at least to ask Claude or ChatGPT for some counterarguments before publishing stuff we want to be informative.