That assumes we are capable of programming a strong AI to do any one thing rather than another; if we cannot do that, then the entire discussion seems moot to me.
And therein lies the rub. Current research-grade AGI formalisms don't actually let us program the agent toward any specific goal, not even paperclips.
If I was unclear, I intended that remark to apply to the original hypothetical scenario in which we do have a strong AI and are trying to use it to find a critical path to a highly optimal world. In the real world we obviously have no such capability. I will edit my earlier remark for clarity.