I don’t address the issue here. See Footnote 2 for a list of other issues I skip.
Two high-level points:
1. I think we shouldn’t grant AIs control over large resources until after we’ve achieved very strong existential security, and possibly not until after we’ve undergone a Long Reflection.
2. However, for the sake of setting precedent, we should be open to near-term deal fulfilment if we are sure the spending would be benign, e.g. I’m happy to donate $100 to AMF on Claude’s request as part of a dealmaking eval.
Ah, yeah, my eyes kinda glossed over the footnote. I agree that, all else equal, it’s good to establish that we do ever follow up on our deals, and I’m theoretically fine with donating $100 to AMF. I’m not sure I’d be comfortable donating to some other charity that I don’t know and that is plausibly part of a weird long game.