This is actually the first writing from Altman I’ve ever read in full, because I find him entirely untrustworthy, so perhaps there’s a style shock hitting me here. Maybe he just always writes like an actual cult leader. But damn, it was so much worse than I expected.
Very little has made me more scared of AI in the last ~year than reading Sam Altman try to convince me that “the singularity will probably just be good and nice by default and the hard problems will just get solved as a matter of course.”
Something I feel is missing from your criticism, and also from most responses to anything Altman says, is “What mechanism in your singularity-seeking plans is there to prevent you, Sam Altman CEO of OpenAI, from literally gaining control of the entire human race and/or planet earth on the other side of singularity?”
I would ask this question because while it’s obvious that a likely outcome is the disempowerment of all humans, another well-known fear of AI is enabling indestructible autocracies through unprecedented power imbalances. If OpenAI’s CEO can personally instruct a machine god to manipulate public opinion in ways we’ve never even conceptualized before, how do we not slide into an eternity of hyper-feudalism almost immediately?
He is handwaving away the threat of disempowerment in his official capacity as one of the few people on earth who could end up absorbing all that power. For me to personally make the statements he made would be merely stupid, but for OpenAI’s CEO to make them is terrifying.
I guess I don’t know if disempowerment-by-a-god-emperor is really worse than disempowerment-without-a-god-emperor, but my overall fear of disempowerment is raised by his obvious incentive to hide that outcome.
For me to personally make the statements he made would be merely stupid, but for OpenAI’s CEO to make them is terrifying.
I think these statements would be terrifying coming from any arbitrary CEO, but Sam Altman in particular has a track record of manipulating people and squashing safety concerns for the specific goal of giving himself more power, and successfully thwarting attempts to reduce his power.
Also, it’s not like these ideas are new to him, or that he hasn’t thought about them before. See the Musk v. Altman emails; ctrl-F “AGI dictatorship”.