Why wouldn’t the answer be normal software or a normal AI (non-AGI)?
Especially since I expect such things to be easier to design, implement, and control than an AGI, even an oracle AGI.
If you ask an oracle AGI “What code should I execute to achieve goal X?”, the result, with very high probability, is an agentic AGI.
You can read this and this.
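To make that point concrete, here is a minimal, purely illustrative Python sketch (the names `oracle`, `pursue`, and `world` are all hypothetical, not taken from the linked posts): the oracle itself only ever answers questions, but its answer to an open-ended goal is source code for a persistent plan-act loop, which is exactly the agentic AGI the question was trying to avoid.

```python
def oracle(question: str) -> str:
    """Stand-in for the oracle AGI: it only answers questions."""
    # The "mere answer" to "What code should I execute to achieve goal X?"
    # is source code for a goal-directed loop, i.e. an agentic AGI.
    return (
        "def pursue(goal, world):\n"
        "    while not world.achieved(goal):       # persistent goal-directedness\n"
        "        action = world.best_action(goal)  # open-ended planning\n"
        "        world.execute(action)             # acting on the environment\n"
    )

answer = oracle("What code should I execute to achieve goal X?")
print(answer)  # the oracle never acted; the danger is in executing its answer
```

The oracle stays a pure question-answerer throughout; the agency appears the moment a human runs the returned code.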
(Edited) The first link was very interesting, but lost me at “maybe the model instantiation notices its lack of self-reflective coordination”, because this sounds like something the (non-self-aware, non-self-reflective) model in the story shouldn’t be able to do. Still, I think it’s worth reading, and the conclusion sounds... barely, vaguely, plausible. The second link lost me because it’s just an analogy; it doesn’t really try to justify the claim that a non-agentic AI actually is like an ultra-death-ray.