A certain philosophy being the most sustainable and positive isn't automatically the same as it being the one people tend to adopt.
I think there is more than ample evidence that such philosophies are significantly less likely to be adopted. However, wouldn't a group of people who know that and can correct for it be the best test case for implementing an optimized strategy?
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power.
I hold the view that it is unnecessary to retain ultimate power over an FAI, and I certainly wouldn't bind it to what has worked for humans thus far. Don't fear the AI; find a way to assimilate.