The reaction to Mechanize seems pretty deranged. As far as I can tell, they don’t deny or hasten existential risk any more than other labs. They just don’t sugarcoat it. It’s quite obvious that the economic value of AI lies in labor automation, and that the only way to stop this is to stop AI progress itself. The forces of capitalism are quite strong: labor unions in the US tried to slow automation, and it just moved to China as a result (among other reasons). There is a reason Yudkowsky always points to measures like GPU bans.
It just seems like they hit a nerve, since apparently a lot of doomerism is fueled by insecurity about job replacement.
They’re intentionally trying to hit a nerve by posting rage bait content. “The future of AI is already written” spends all its effort establishing that the economic incentives are too strong to resist automation indefinitely, but that doesn’t prove that the future isn’t highly contingent in other ways—notably, whether that AI is aligned. They overstated the title to piss off AI safety people and go viral.
They stoop considerably lower than this, though, recycling their negative attention into cheaper, dumber ragebait tweets. This is why people dislike them so much.
How is any of that wrong, or related to the question of AI being aligned? Do doomers seriously think you can indefinitely stop automation? It’s been happening for centuries.
They’re ignoring alignment, but so are most labs. I still don’t get how this is irrational. If it were worded as “AI will inevitably become smarter”, then no one here would care.
I don’t understand what you mean. “The future of AI is already written” is the title of the piece, and false, for the reason I stated. The future is uncertain, and highly contingent, in the key sense of whether AI will be aligned. If they titled the piece “AI will inevitably become smarter”, that wouldn’t have angered people, because that’s a different claim, one that’s true rather than false. People were angry because they said something wrong in a very important way to attract attention.