That’s also because this is a simplified example, merely intended to provide a counter-example to your original assertion.
As I’ve stated before, no AI can predict its own decisions in that sense (i.e., in detail, before it has made them). Knowing its source code doesn’t help; it has to run the code in order to know what result it gets.
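To make that concrete, here’s a toy sketch in Python (my own illustration, with invented names) of why a “detailed self-prediction” is nothing more than a re-run of the decision procedure itself:

```python
# Toy illustration (hypothetical names): the only fully faithful way for a
# program to "predict" its own output is to perform the very computation
# whose result it wants to know.

def decide(world_state: int) -> int:
    """Some nontrivial decision procedure."""
    return (world_state * 31 + 7) % 5

def predict_own_decision(world_state: int) -> int:
    # Reading the source of `decide` tells us *how* it computes, but to learn
    # *what* it returns we still have to execute it. The "prediction"
    # finishes no earlier than the decision itself.
    return decide(world_state)

print(predict_own_decision(42) == decide(42))  # True, but gained no head start
```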
Agreed, it isn’t an intelligent action, but if you start saying intelligent agents can only take intelligent decisions, then you’re playing No True Scotsman.
I can imagine plenty of situations where someone might want to design an agent that takes certain unintelligent decisions in certain circumstances, or an agent that self-modifies in that way. If an agent can not only make promises, but also formally prove, by exhibiting its own source code, that those promises are binding and that it can’t change them, then it may be at an advantage in negotiations and cooperation over an agent that can’t do that (a toy sketch of this follows below).
So “stupid” decisions that can be predicted by reading one’s own source code aren’t a feature that I consider unlikely in the design-space of AIs.
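Here’s a minimal sketch of that kind of verifiable commitment (all names are invented for illustration; a real scheme would also have to guarantee that the exhibited source is what the agent actually runs):

```python
# Minimal sketch (hypothetical names) of an agent that can back up a promise
# by exhibiting its own source code.
import inspect

def respond(offer: str) -> str:
    # Hard-coded commitment: cooperate on any fair offer, even when defecting
    # would score higher in this particular round. A "stupid" decision that is
    # fully predictable from the source.
    if offer == "fair":
        return "cooperate"
    return "refuse"

def exhibit_source() -> str:
    # A counterparty can read this and verify that no code path breaks the
    # promise, assuming the exhibited source is what actually runs.
    return inspect.getsource(respond)

if __name__ == "__main__":
    print(exhibit_source())
    print(respond("fair"))  # -> "cooperate", predictable from the source alone
```

The point is that `respond` is predictable from its source alone, which is exactly what makes the promise credible to a counterparty.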
I would agree with that. But I would just say that the AI would experience doing those things (for example, keeping such promises) as we experience reflex actions, not as decisions.