It seems fine to me to have the goalposts moving, but then I think it’s important to trace through the implications of that.
Like, if the goalposts can move, then this seems like perhaps the most obvious way out of the predicament: keep the goalposts ever ahead of AI capabilities. But when I read your post I get the vibe that you're not imagining this as a possibility?
I think it seems like a fine possibility in principle, actually; sorry to have given the wrong impression! It's not my central hope, since strategy-stealing seems like it should make many human augmentations "available" to AI systems as well. This is notably not true for things involving, e.g., BCIs or superbabies.