Here is some evidence for my hypothesis. It’s weak because the platform really encourages users with un-made-up minds to have their minds made up for them.
tl;dw: YouTuber JREG presents his position as explicitly anti-AI-welfare, because in the future he expects that
“I, as an armed being, will need to amputate my arms to get the superior robot arms, because there’s no reason for me to have the flesh-and-blood arms anymore”—this alongside a meme:
“the minimal productive burden evermore unreachable by an organic mind”
He doesn’t deny the possibility of future AI suffering. Rather, he expects humans to be supplanted by AI, and holds that by trying to anticipate AIs’ moral status now, we are allocating resources and rights to beings that aren’t and may never become moral patients, thereby diminishing the share of resources and the strength of rights of actual moral patients.
None of this necessarily reflects my own opinion.