When I’m thinking about this, it seems kind of fine if the goalposts move—human strategic capacity will certainly move over time no matter what, right? Like, someone invented crowdfunding and suddenly we could do types of coordination that we previously couldn’t do.
It seems fine to me to have the goalposts moving, but then I think it’s important to trace through the implications of that.
Like, if the goalposts can move, then this seems like perhaps the most obvious way out of the predicament: keep the goalposts ever ahead of AI capabilities. But when I read your post, I get the vibe that you’re not imagining this as a possibility?
I think it seems like a fine possibility in principle, actually; sorry to have given the wrong impression! It’s not my central hope, since strategy-stealing seems like it should make many human augmentations “available” to AI systems as well, in which case they wouldn’t keep the goalposts ahead for long. This is notably not true for things involving, e.g., BCIs or superbabies.