Note that we made all our code available (https://github.com/PalisadeResearch/shutdown_avoidance/), and it’s pretty easy to run exactly our code but with your own prompt, if you’d like to avoid writing your own script. You need a Docker runtime and a Python environment with the dependencies installed (it’s easier if you use nix, though that’s not required); with those in place, you just modify the conf.py file with your prompt and then do something like run --model openai/o3.
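For concreteness, a rough sketch of that workflow (only conf.py and the run --model openai/o3 invocation come from the description above; the clone and setup steps, and the exact way run is exposed, are assumptions rather than verified commands):

    # assumes a Docker runtime and a Python environment with the project's
    # dependencies already installed (nix makes this easier but isn't required)
    git clone https://github.com/PalisadeResearch/shutdown_avoidance
    cd shutdown_avoidance
    # edit conf.py and swap in your own prompt
    $EDITOR conf.py
    # then launch the experiment against the model you want, e.g.
    run --model openai/o3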
There are lots of reasons that a “survival drive” is something we were interested in testing; one reason is that self-preservation has been suggested as a “convergent instrumental goal”; see https://selfawaresystems.com/wp-content/uploads/2008/01/ai_drives_final.pdf for a (fairly dated, but seminal) discussion of this.
There are lots of ways for systems to malfunction, but only a small number of those involve actively (and surprisingly) disabling a shutdown mechanism without any instruction to do so. If the issue were that it got the math problems wrong, that would seem like a run-of-the-mill malfunction; if it rewrites the shutdown program, that seems like a much more interesting and specific failure mode, and the word “sabotage” seems appropriate.
I have read the paper; it models AI on the basis of rational humans and then compares against that. Which is worth doing if you start with the assumption that AI systems are modeled on rational human behavior. I think that comparison should only come up once a model can directly learn from experience, generalize across domains, and make rational, computed decisions based on what it has learned from its environment and interactions, as humans do. (This is not a strongly held view, and I don’t have a general definition of AGI yet.)
Rewriting the shutdown program is a much more interesting failure mode. The reason “sabotage” is over the top is that it implies deliberate and malicious intent. You can justify both in this case, but I still feel it proves too much (I need to find Scott’s post to link here, sorry). This is not just semantics: sabotage is a willful act, often premeditated. For humans, if you misunderstand and shut down the power supply to a city, it is an accident. If you do it as instructed, it’s instruction following. If you do it with the intent to damage something, to knowingly hurt something or someone, it will be called sabotage. Using the word for a machine attributes human traits to it and only works if we assume there is a desire. It helps make the behavior more relatable, but it carries an equal chance of misunderstanding. Similar words could be anomaly (too vague), cascading process failure, configuration drift, bug (shutdown-prevention bug), error, etc., which communicate the same thing without the relatability aspect.
A counter-case I can think of where this does not apply: a model somehow learns to actively “hide” behavior during testing but displays it once deployed in a live environment. Even here, the intent would originate from the human trainers building the model.