Well, I agree that I chose my words badly, didn’t explain the intended meaning, and kept speaking in metaphors (my writing skills are seriously lacking). What I called the “personality” of a delegate was a function that assigns a utility score to any given state of the world (at the start, these functions are determined by moral theories). In my first post I treated these utility functions as constants that stay fixed throughout the negotiation process (my impression was that ESRogs’s third assumption implicitly says much the same thing), with delegates perhaps accepting binding agreements when those increase expected utility (such agreements are ad hoc and not treated as part of the utility function).
On the other hand, what if we drop the assumption that these utility functions stay constant? What if, when two delegates meet, instead of exchanging binding agreements to vote in a specific way, they exchanged agreements to self-modify in a way that corresponds to those agreements? That is, suppose delegate M_1 strongly prefers option O_1,1 to option O_1,2 on issue B_1 and slightly prefers O_2,1 to O_2,2 on issue B_2, while delegate M_2 strongly prefers O_2,2 to O_2,1 on issue B_2 and slightly prefers O_1,2 to O_1,1 on issue B_1. M_1 could agree to vote (O_1,1; O_2,2) in exchange for M_2’s promise to vote the same way, and sign a binding agreement. Alternatively, M_1 could agree to self-modify to slightly prefer O_2,2 to O_2,1 in exchange for M_2’s promise to self-modify to slightly prefer O_1,1 to O_1,2. (Both want to self-modify as little as possible, though any modification that is not ad hoc would probably affect the utility function at more than one point. Self-modification here is restricted, since only the utility function changes, so maybe it wouldn’t require heavy machinery, though I am not sure; besides, all the utility functions ultimately belong to the same person.) These self-modifications are not binding agreements: delegates remain free to further self-modify their “personalities” (i.e., utility functions) in later exchanges.
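The exchange above can be sketched in code. This is only a toy illustration under my own assumptions: utilities are represented as simple per-issue score tables, the specific numbers are made up, and “smallest modification” is modeled crudely as nudging the weak preference just past its flip point.

```python
# Toy sketch: two delegates trade small utility self-modifications
# instead of binding vote agreements. All numbers are illustrative.

def vote(utilities):
    """Pick the preferred option on each issue from a utility table."""
    return {issue: max(options, key=options.get)
            for issue, options in utilities.items()}

# M_1: strongly prefers O_1,1 on B_1, slightly prefers O_2,1 on B_2.
m1 = {"B_1": {"O_1,1": 1.0,  "O_1,2": 0.0},
      "B_2": {"O_2,1": 0.55, "O_2,2": 0.45}}

# M_2: strongly prefers O_2,2 on B_2, slightly prefers O_1,2 on B_1.
m2 = {"B_1": {"O_1,1": 0.45, "O_1,2": 0.55},
      "B_2": {"O_2,1": 0.0,  "O_2,2": 1.0}}

# The exchange: each delegate flips only its weak preference,
# the smallest modification that changes its own vote.
m1_mod = {**m1, "B_2": {"O_2,1": 0.45, "O_2,2": 0.55}}
m2_mod = {**m2, "B_1": {"O_1,1": 0.55, "O_1,2": 0.45}}

# Both delegates now independently vote (O_1,1; O_2,2) on (B_1; B_2),
# with no binding agreement in force.
print(vote(m1_mod))  # {'B_1': 'O_1,1', 'B_2': 'O_2,2'}
print(vote(m2_mod))  # {'B_1': 'O_1,1', 'B_2': 'O_2,2'}
```

Note that after the exchange the agreement is no longer needed: each modified delegate votes (O_1,1; O_2,2) simply because that is what its current utility function prefers.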
Now, this idea vaguely reminds me of smoothing over the space of all possible utility functions. Metaphorically, it looks as if delegates were “persuaded” to change their “personalities”, their “opinions about things” (i.e., utility functions), by an “argument” (i.e., an exchange).
I would guess these self-modifying delegates should be treated as dummy variables during a finite negotiation process: after the vote, each delegate would revert to its original utility function.