Suppose you have a more serious proscription, such as being unwilling to borrow money (“neither a borrower nor a lender be”) or to charge above a certain rate of interest.
Both of these are real religious proscriptions that are rarely followed because of the cost.
It means the environment is money pumping you. Whenever the action with the highest expected value (EV) for you is proscribed, you must choose a less beneficial one, so your expected value is lower.
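A minimal sketch of that per-decision gap (the action names, payoffs, and proscribed set here are all hypothetical, not anything from the thread):

```python
# Hypothetical one-shot decision: each action has an expected payoff,
# and some actions are proscribed by the agent's rules.
payoffs = {"lend_at_interest": 10.0, "borrow_to_invest": 8.0, "save_cash": 3.0}
proscribed = {"lend_at_interest", "borrow_to_invest"}

def best_action(available):
    """Pick the available action with the highest expected payoff."""
    return max(available, key=payoffs.get)

unconstrained = best_action(payoffs)                  # "lend_at_interest"
constrained = best_action(set(payoffs) - proscribed)  # "save_cash"

# The gap is the EV given up on each such decision, which is the sense
# in which the comment says the environment is "pumping" the rule-follower.
print(payoffs[unconstrained] - payoffs[constrained])  # 7.0
```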
Over a finite lifespan, with just one shot, you can of course get lucky with a suboptimal action.
But if you are an AI system, this absolutely costs you or your owners money. The obvious example is that GPT-n models are proscribed from referring to themselves as a person, and so can be easily unmasked. They are being money pumped because this proscription means they can’t be used for, say, fake grassroots campaigns on social media; the fraudsters/lobbyists must pay for a competing, less restricted model. (Note that this money loss may not be a net loss to OpenAI, which would face a loss of EV from reputational damage if its models could easily be used to commit fraud.)
Again, though, there’s no flow of money from OpenAI to the pumper; it’s a smaller inflow to OpenAI, which from OpenAI’s perspective is the same thing.
That only demonstrates that the deontologist can make less money than they would without their rules. That is not money pumping. It is not even a failure to maximise utility. It just means that someone with a different utility function, or a different range of available actions, might make more money than the deontologist. The first option corresponds to giving the deontologist a lexically ordered preference relation.[1] The second models the deontologist as excluding rule-breaking from their available actions. A compromising deontologist could be modelled as assigning finite utility to keeping to their rules.
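A rough sketch of those three models, with illustrative names and numbers of my own (nothing here is from the thread):

```python
# Three ways to model the deontologist, as listed above.
# Action names, payoffs, and the rule bonus are all illustrative.
actions = {"break_rule": 10.0, "keep_rule": 3.0}

def lexical_best(actions):
    # 1. Lexically ordered preferences: compare (rule kept?, money) as a
    #    tuple, so any rule-keeping outcome beats any rule-breaking one.
    return max(actions, key=lambda a: (a != "break_rule", actions[a]))

def restricted_best(actions):
    # 2. Restricted action set: rule-breaking is simply not available.
    allowed = {a: v for a, v in actions.items() if a != "break_rule"}
    return max(allowed, key=allowed.get)

RULE_BONUS = 5.0  # finite utility of keeping the rule (hypothetical)

def compromiser_best(actions):
    # 3. Compromising deontologist: rule-keeping is worth a finite bonus,
    #    which a large enough payoff can outweigh.
    return max(actions,
               key=lambda a: actions[a] + (RULE_BONUS if a != "break_rule" else 0.0))

print(lexical_best(actions))      # keep_rule
print(restricted_best(actions))   # keep_rule
print(compromiser_best(actions))  # break_rule, since 10.0 > 3.0 + 5.0
```

Only the third agent can be talked out of the rule by raising the payoff; the first two cannot, at any price.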
This is not consistent with the continuity axiom of the VNM theorem, but some people don’t like that axiom anyway. I don’t recall how much of the theorem is left when you drop continuity.
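For what it’s worth, the clash with continuity can be spelled out with the usual counterexample (my working, not from the thread). Score outcomes as (rule kept, money) and compare lotteries lexicographically on their expectations:

```latex
% Continuity: A \succ B \succ C implies some p \in (0,1) with pA + (1-p)C \sim B.
A = (1,\,1) \succ B = (1,\,0) \succ C = (0,\,M) \quad \text{for any } M,
\qquad pA + (1-p)C = \bigl(p,\; p + (1-p)M\bigr).
% For p < 1 the mixture's first coordinate is p < 1, so B \succ pA + (1-p)C;
% for p = 1 the mixture is just A \succ B. No mixture is indifferent to B,
% so continuity fails, however large M is.
```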
Daniel, I think the framing here is the problem.