If we’re going to assume that someone creates aligned ASI, I think this whole conversation is somewhat moot. If by some miracle we manage to solve the alignment problem before the mad scientists at OpenAI create Cthulhu, one of the first things people are going to use it for is to upload their own brains to a computer. Like, why would you stick with spongy meat when you could just run a digital copy of yourself a million times faster?
I think there’s a pretty low chance that ASI goes well. Even if we manage to align the interests of ASI with its creators, whoever gets control of it is going to rule the world. The fantasy where everyone benefits equally seems very unrealistic to me. It’s way more likely that we end up with one or perhaps a few quasi-omniscient immortal dictators.
Nah, uploads will never be the move, substrate replacement will be—which is exactly my point.
Anyway, no, timelines are extremely short, and I should probably just put this on the back burner and not care, because what you’re doing doesn’t matter with only 2 years left on the clock and I need to hurry up and make ASI safe.
Also,
align the interests of ASI with its creators
No, we want to align ASI with cosmopolitanism. Any attempt to hard-align it with specific phenotypic values—in other words, the bulk of the implicit utility function a being implies—will result in a puppet-show lock-in of those phenotypic values, which the being in question will find gets old very fast, if their phenotype is even preserved in enough detail to have such thoughts.