With sufficient predictive capability, nobody needs to be oppressed, at least not in the usual sense. They will just find themselves nudged down the path that lets them be satisfied without much harming others.
I’m probably more comfortable with this future than most. I think it’s an interesting question how we should relate emotionally to being predicted and optimized for.
And of course, for AI that isn’t able to model humans well enough to notice and ignore or correct bad behavior, we obviously shouldn’t give a bunch of unstable, high-leverage power to lots of randos. But it could still be reasonable to give relatively stable, well-aggregated power to the masses.
So it seems that an incipient AI needs a protected environment to develop into one capable of reliably carrying out such activities, much like raising children in protected environments before adulthood.