This. Combine this fact with the non-trivial chance that moral values are subjective, not objective, and there is little good reason to be doing alignment.
While human moral values are subjective, there is a sufficiently large shared core that you can target when aligning an AI. Likewise, values held by a majority (e.g., caring for other humans, enjoying certain kinds of fun) are effectively shared, and values held by smaller groups can also be catered to.
If humans were sampled from the entire space of possible values, then yes, we (maybe) couldn’t build an AI aligned to humanity. But we only occupy a relatively small region of that space and share a lot of values.
So do you think that, instead, we should just not try to make an AGI at all?
Not really. I do want to make an AGI, primarily because I very much want a singularity, as it represents hope to me, and I have very different priors than Eliezer or MIRI about how doomed we are.
So you think that, since morals are subjective, there is no reason to make an effort to control what happens after the singularity? I really don’t see how that follows.