“I hereby precommit to make my decisions regarding whether or not to blackmail an individual independent of the predicted individual-specific result of doing so.”
I’m afraid your username nailed it. This algorithm is defective. It just doesn’t work for achieving the desired goal.
Two can play that game.
The problem is that this isn’t the same game. A precommitment not to be successfully blackmailed is qualitatively different from a precommitment to attempt to blackmail people for whom blackmail doesn’t work. “Precommitment” (or behaving as if you had made all the appropriate precommitments in accordance with TDT/UDT) isn’t as simple as proving one is the most stubborn and dominant and thereby claiming the utility.
Evaluating extortion tactics while distributing gains from a trade is somewhat complicated. But it gets simple and unambiguous when the extortive tactics rely on the extorter going below their own Best Alternative to a Negotiated Agreement. Those attempts should just be ignored (except in some complicated group situations in which the other extorted parties are irrational in certain known ways).
“I am willing to accept 0 gain for both of us unless I earn 90% of the shared profit” is different to “I am willing to actively cause 90 damage to each of us unless you give me 60”, which is different again to “I ignore all threats which involve the threatener actively harming themselves”.
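The distinction between those three stances can be sketched as a toy decision rule. This is a hypothetical illustration, not anything from decision theory literature: the function and its payoff numbers are made up, and it only captures the one criterion named above, namely whether carrying out the threat would leave the threatener below their own Best Alternative to a Negotiated Agreement (BATNA).

```python
def respond_to_demand(threatener_payoff_if_refused, threatener_batna):
    """Classify a demand by what refusing it costs the *threatener*.

    If carrying out the threat leaves the threatener below their own
    BATNA, the threat is self-harming extortion and gets ignored.
    Otherwise it is ordinary (if aggressive) hard bargaining.
    """
    if threatener_payoff_if_refused < threatener_batna:
        # "I will actively cause 90 damage to each of us": executing
        # the threat puts the threatener below their walk-away point,
        # so the policy described above simply ignores it.
        return "ignore"
    # "I accept 0 gain for both of us unless I get 90%": refusing the
    # split still leaves the threatener at their BATNA (0 gain), so
    # this is hard bargaining, not self-harming extortion.
    return "negotiate"

print(respond_to_demand(-90, 0))  # self-harming threat
print(respond_to_demand(0, 0))    # hard bargaining
```

The point of the sketch is that the third stance in the quote above only fires on the second kind of demand: it ignores threats whose execution harms the threatener, while still engaging with tough-but-rational bargaining positions.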
What I think is being ignored is that the question isn’t ‘what is the result of these combinations of commitments after running through all the math?’. We can talk about precommitment all day, but the fact of the matter is that humans can’t actually precommit. Our cognitive architectures don’t have that function. Sure, we can do our very best to act as though we can, but under sufficient pressure there are very few of us whose resolve will not break. It’s easy to convince yourself of having made an inviolable precommitment when you’re not actually facing e.g. torture.
We can talk about precommitment all day, but the fact of the matter is that humans can’t actually precommit.
If you define the bar high enough, you can conclude that humans can’t do anything.
In the real world outside my head, I observe that people have varying capacities to keep promises to themselves. That their capacity is finite does not mean that it is zero.
We can talk about precommitment all day, but the fact of the matter is that humans can’t actually precommit.
Precommitment isn’t even necessary. Note that the original explanation didn’t include any mention of it. Later replies only used the term for the sake of crossing an inferential gap (i.e. allowing you to keep up). However, if you are going to make a big issue of the viability of precommitment itself, you need to first understand that the comment you are replying to isn’t a precommitment.
That wasn’t a Causal Decision Theorist attempting to persuade someone that it has altered itself, internally or via an external structure, such that it is “precommitted” to doing something irrational. It was a Timeless Decision Theorist saying what happens to be rational regardless of any previous ‘commitments’.
Our cognitive architectures don’t have that function. Sure, we can do our very best to act as though we can, but under sufficient pressure there are very few of us whose resolve will not break.
I’m aware of the vulnerability of human brains, and so is Eliezer. In fact, the vulnerability of human gatekeepers to influence even by humans, much less superintelligences, is something Eliezer made a huge deal about demonstrating. However, this particular threat isn’t a vulnerability of Eliezer or myself or any of the others who made similar observations. If you have any doubt that we would destroy the AI, you have a poor model of reality.
It’s easy to convince yourself of having made an inviolable precommitment when you’re not actually facing e.g. torture.
For practical purposes I assume that I can be modified by torture such that I’ll do or say just about anything. I do not expect the tortured me to behave the way the current me would decide and so my current decisions take that into account (or would, if it came to it). However this scenario doesn’t involve me being tortured. It involves something about an AI simulating torture of some folks. That decision is easy and doesn’t cripple my decision making capability.
As I pointed out in another thread, “irrational behavior” can have the effect of precommitting. For instance, people “irrationally” drive at a cost of more than $X to save $X on an item. Precommitting to buying the cheapest product even if it costs you money for transportation means that stores are forced to compete with far distant stores, thus lowering their prices more than they would otherwise. But you (and consumers in general) have to be able to precommit to do that. You can’t just change your mind and buy at the local store when the local store refuses to compete, raises its price, and is still the better deal because it saves you on driving costs.
So the fact that you will pay more than $X in driving costs to save $X can be seen as a form of precommitting, in the scenario where you precommitted to following the worse option.
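The arithmetic in the shopping example can be sketched with made-up numbers (the prices and driving cost below are hypothetical, chosen only so that the driving cost exceeds the sticker saving):

```python
# A distant store sells the item cheaper, but the drive costs more than
# the sticker saving, so buying locally is the better deal in isolation.
local_price = 110
distant_price = 100
driving_cost = 15  # exceeds the $10 sticker saving

# Flexible consumer: buys wherever the *total* cost is lowest. Knowing
# this, the local store can safely charge anything up to
# distant_price + driving_cost and still win the sale.
flexible_total = min(local_price, distant_price + driving_cost)

# Precommitted consumer: buys wherever the *sticker* price is lowest,
# even at a net loss on this trip. Knowing this, the local store must
# match the distant price or lose the sale, so its price falls.
committed_local_price = distant_price

print(flexible_total)         # what the flexible consumer pays locally
print(committed_local_price)  # local price once the commitment is known
```

Under these assumed numbers, the “irrational” willingness to spend $15 driving to save $10 is exactly what pushes the local price from 110 down to 100, which is the sense in which the behavior functions as a precommitment.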