Unless I’m missing something, this seems to disregard the possibility of deception. Or it handwaves deception away in a line or two.
The type of person who ends up as the CEO of a leading AI company is likely (imo) someone very experienced in deception and manipulation — at the very least through having others try it on them, even if by some remote chance they haven't used deception to gain power themselves.
A clever, seemingly logically sound argument for them to slow down, trusting that their competitor will also slow down because of the same argument, will ring all kinds of alarm bells.
I think whistleblower protections, licensing, enforceable charters, mandatory third-party safety evals, etc. have a much higher chance of working.