Yeah, this is a good point. The way I’ve put it before is: when you are thinking about what should happen, you’re basically imagining you have some sort of magic wand that makes it happen. But how powerful is the magic wand? I haven’t thought this through to my satisfaction, so for now I’m just going based on intuitive notions of what is actually realistically achievable.
But one way of trying to define the limits of the “magic wand” here would be: You get to magically choose a policy to be adopted, but you don’t get to magically control people’s behavior afterwards. So if you want to get people to limit AI uses, your policy needs to deal with their potential incentives to do otherwise.
This means, IIUC, that the answer to your final question is “yes”. But it’s more a matter of perceived incentives here, IMO, see: https://therealartificialintelligence.substack.com/p/following-the-incentives

> If someone believes that it will be hard to make international agreements to stop AI because countries will have incentives against this, does that mean that those considerations now fall under “incentives” and thus count for purpose of determining whether stopping is “hard”?
> But one way of trying to define the limits of the “magic wand” here would be: You get to magically choose a policy to be adopted, but you don’t get to magically control people’s behavior afterwards. So if you want to get people to limit AI uses, your policy needs to deal with their potential incentives to do otherwise.
That makes sense to an extent. If I can summarize my understanding of your point: for purposes of understanding how enforceable a policy is, we assume that the policy is implemented and then analyze enforcement. We want to do this to separate the difficulty of implementing a policy from the question of post-implementation enforcement. Assuming that both stopping and regulating were implemented, you give reasons to believe that regulation would be harder to enforce. Is that correct?
The part I don’t understand is how that relates to your conclusions at the end:
> Note that “Stopping AI is too hard, we need to regulate it in a different way instead” is not on the list.
Where the list is of “the coherent points of view available”. I don’t think this follows because something can be “hard” for non-enforcement reasons. So someone can coherently believe that regulation has non-enforcement advantages and enforcement-related disadvantages, with the advantages outweighing the disadvantages (relative to stopping). This seems entirely coherent to me (which isn’t to say that I agree with it).
If the statement I quote above has an implicit (only as it relates to enforcement) attached, then I don’t really understand what it means beyond the fact that if someone accepts your argument, they are in fact accepting your argument. The conclusion becomes almost tautological, such that it doesn’t really seem to relate to coherence to me (because someone who disagrees probably disagrees with an earlier step in your argument).