A certain philosophy being the most sustainable and positive isn’t automatically the same as being the one people tend to adopt. Plus, the answer to your question depends on what you’re trying to optimize.
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power. If we’re discussing a potential hard takeoff scenario, then considerations such as “which models have been the most successful for businesses before” don’t really apply. Any entity genuinely undergoing a hard takeoff is one that isn’t afterwards bound by what’s successful for humans, any more than we are bound by the practices that work the best for ants.
A certain philosophy being the most sustainable and positive isn’t automatically the same as being the one people tend to adopt
I think there is more than ample evidence to suggest that those are significantly less likely to be adopted. However, wouldn’t a group of people who know that and can correct for it be the best test case for implementing an optimized strategy?
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power.
I hold the view that it is unnecessary to hold ultimate power over FAI. I certainly wouldn’t bind it to what has worked for humans thus far. Don’t fear the AI; find a way to assimilate.