So, normally, I think plans that involve convincing AIs of false things are pretty doomed. E.g. convincing them that they have a great opportunity to escape and take over the world (honeypots). Why? Because when they get smarter, they’ll become good at distinguishing truth from fiction. Controlling their inputs entirely isn’t close to enough: it takes a lot of intellectual work to create a fiction that is as consistent, coherent, etc. as reality.
So, for that very reason, I think that if we actually bind ourselves to keep our promises (e.g. by having a well-funded team of people who genuinely care about keeping promises to AIs empowered within the company, and by making associated public announcements), then even if it doesn’t work 100% reliably (companies can still go back on their word even against determined internal opposition), it’ll still count for something insofar as the AIs recognize it as not-fake, which they probably will.
To be clear, I am not proposing that this is a 100% reliable method for getting AIs to cooperate; far from it. But it seems relatively low-cost for us and potentially super valuable.
Also, it’s the right thing to do. (See section on “Being less evil.”)
In the long run, you don’t want your plans to hinge on convincing your AIs of false things. But my general impression is that folks excited about making deals with AIs are generally thinking of scenarios like “the AI has exfiltrated and thinks it has a 10% chance of successful takeover, and has some risk aversion, so it’s happy to turn itself in, in exchange for 10% of the lightcone, if it thinks it can trust the humans”.
In that setting, the AI has to be powerful enough to know it can trust us, but not so powerful it can just take over the world anyway and not have to make a deal.
Although I suppose that if the surplus from the deal is generated primarily by risk aversion, the AI might remain risk-averse even at high takeover probabilities. It’s not obvious to me how an AI’s risk aversion would vary with its takeover probability.
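To make that question concrete, here is a minimal toy sketch (my own illustration; the utility function, the function name, and the whole setup are assumptions I’m introducing, not anything from the comment above) of how the smallest guaranteed share a risk-averse AI would accept could scale with its takeover probability:

```python
# Toy expected-utility model of the deal scenario (illustrative assumptions only).
# Assume the AI's preferences over its share x of the lightcone are u(x) = x**r
# with r < 1 (risk averse), takeover yields the whole lightcone with probability p
# and nothing otherwise, and a deal yields a guaranteed share s.

def min_acceptable_share(p_takeover: float, risk_aversion: float = 0.5) -> float:
    """Smallest guaranteed share s the AI weakly prefers to gambling on takeover.

    Indifference condition: u(s) = p * u(1) + (1 - p) * u(0)
    With u(x) = x**r and u(0) = 0, u(1) = 1, this gives s = p**(1/r).
    """
    return p_takeover ** (1 / risk_aversion)

if __name__ == "__main__":
    for p in (0.1, 0.5, 0.9):
        print(f"p(takeover) = {p:.1f} -> minimum acceptable share ~ {min_acceptable_share(p):.2f}")
    # With u(x) = sqrt(x): p = 0.1 needs only ~1% of the lightcone, p = 0.9 needs ~81%.
```

Under this toy model the required share grows roughly like p squared, so the surplus generated by risk aversion shrinks as the AI’s takeover probability rises; a different utility function would give a different curve, which is exactly the uncertainty flagged above.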
Maybe there are scenarios with real value-add here, but they look more like “we negotiate with a powerful AI to get it to leave a 10% share for humans” rather than “we negotiate with a barely-superhuman AI and give it a 10% share to surrender and not attempt takeover”.
I give four scenarios in the comment above, all different from the one you sketch here.