I think these are all still pretty bad. For example, if there are human uploads but no stronger AI, that will lead to horrors (“data can’t defend itself,” as Sam Hughes put it). If there are biological superbrains, the same applies. Look at what we humans did with our intelligence: we’ve introduced a horror (factory farms and fish farms) that surpasses anything in nature. Going forward we must take two steps in morality for every step in capability; otherwise the horrors will increase proportionally.
The physical world, with all its problems, is kind of a marvel in that it allows a degree of individualism, locality, a speed limit. One can imagine that it was engineered by creatures from a nastier world, who one day said: you know what, let’s build a system where creatures fundamentally cannot be hurt and erased at a distance, or at least where it’s much harder to do so. That’s just a metaphor, but to me the holy grail of AI safety would be building the next world in this metaphorical chain: one where we’d still have to live by the sweat of our brow, but where the guardrails would be a little stronger than in our world.
For example, one could imagine a world where game theory is less of a cruel master than it is in ours, where “no need for game theory” is written into the base bricks of existence. A world that is a challenge but not a battleground. Designing and building such a world would be a monumentally difficult task, but that’s what superintelligence is for.