If I’m following your math correctly, this is a fully-general argument that it’s impossible to prevent any action with non-zero reward and non-zero cost of failure. I’m not really a math person, but something must be wrong here, because people are successfully deterred from actions with non-zero reward and non-zero cost of failure all the time.
It also seems suspicious that your equation has no term for the cost of getting caught breaking the hypothetical anti-AI law/international norm.
I appreciate your feedback, and I’ll change the math to account for the cost of getting caught, since I think that’s a significant oversight on my part. There are a few subtle distinctions I want to point out first; a rough sketch of the amended condition follows at the end.
First: I said corporations/companies, not people; sorry if I didn’t make that clear. The reason for the corporate framing rather than the personal one is to motivate the reward-to-money relationship and to make the quantitative argument more sound.
Second: AGI works differently. We’re concerned not with a rate of success but with success categorically, i.e. ‘who gets there first?’. This changes how we should look at prevention. With ordinary crime we draft laws to reduce the rate of an action; with AGI, a single success is decisive, so the action has to be prevented from occurring at all (the short calculation below illustrates why rate reduction isn’t enough). The norm must therefore be upheld not only legally but intrinsically: it must be the case that no company will ever participate in such a project, because doing so cuts against basic properties of corporate reward functions (sign, relative magnitude, dynamics, etc.).
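To make the rate-versus-categorical point concrete (an illustrative calculation with made-up numbers, not figures from my model): if each of $n$ actors independently has a small per-year success probability $p$, then over $T$ years

$$P(\text{at least one success}) = 1 - (1 - p)^{nT}.$$

Even with $p = 0.01$, $n = 50$, and $T = 20$, this is $1 - 0.99^{1000} \approx 0.99996$. Reducing the rate only delays the outcome; prevention requires driving participation to zero.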
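And here is the promised sketch of the amended condition. The symbols are my own placeholders, not the notation from my original post: $R$ is the reward for success, $p_s$ the probability of success, $C_f$ the cost of failure, and $p_c$ and $C_c$ the probability and cost of getting caught. A company’s expected value for pursuing the project becomes

$$\mathbb{E}[\text{pursue}] = p_s R - (1 - p_s)\, C_f - p_c C_c,$$

and deterrence requires this quantity to be negative for every company:

$$p_c C_c > p_s R - (1 - p_s)\, C_f.$$

So the argument isn’t that prevention is impossible whenever reward is non-zero; it’s that the catch-probability-times-penalty term has to outweigh an unusually large $p_s R$ for every single actor.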