Thinking further about my own question, it would depend on whether one values not just QALYs, but confidence that one had indeed bought some number of QALYs.
In other words, it depends on whether you donate to help people or to make yourself feel good.
The function U need not be based on what a third party thinks its values should be. Donating to make oneself feel good is a perfectly rational reason, provided one values the warm fuzzy feelings more than the money.
Fair enough, the argument does not hold in that case. If you are donating to make yourself feel good then you should diversify.
However, if you are donating to make yourself feel good, i.e. if you value confidence as well as QALYs, then your preference relation is no longer given by U: valuing confidence implies that you care differently depending on whether you bought the QALYs or someone else did, so your preferences are not a function solely of the number of antelope and the number of babies.
The only qualification of U is that its values map to my preferences and that it is transitive, such that if U(a1, b1) > U(a2, b2) > U(a3, b3), then U(a3, b3) < U(a1, b1). There is no requirement that the arguments of U be measured in dollars; the arguments could just as easily be the non-real sum of the monies provided by others and the monies provided by me.
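(As a trivial sketch, with invented utility numbers: once preferences are represented by a real-valued U, that transitivity condition comes for free, since > on the real numbers is itself transitive.)

```python
# Hypothetical utility values for three (antelope, babies) outcomes.
u = {("a1", "b1"): 3.0, ("a2", "b2"): 2.0, ("a3", "b3"): 1.0}

def prefers(x, y):
    # Outcome x is preferred to outcome y iff it has higher utility.
    return u[x] > u[y]

assert prefers(("a1", "b1"), ("a2", "b2"))
assert prefers(("a2", "b2"), ("a3", "b3"))
# Transitivity is automatic, because > on the reals is transitive:
assert prefers(("a1", "b1"), ("a3", "b3"))
```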
U is a function of the number of antelope and the number of babies. By referential transparency, it doesn't care whether there are 100 antelope because you saved them or because someone else did. If you do care, then your preference function cannot be described as a function on this domain.
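To make that concrete, here is a minimal sketch (the weights are invented for illustration): a function that only receives the counts has no way to depend on who produced them.

```python
def U(antelope: int, babies: int) -> float:
    # Invented weights; U sees only the counts, nothing about history.
    return 2.0 * antelope + 5.0 * babies

# Two different histories ending in the same world-state:
you_saved_them = U(100, 10)        # you paid for the outcome
someone_else_did = U(100, 10)      # a stranger paid for it
assert you_saved_them == someone_else_did  # U cannot tell them apart
```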
As defined in the original post, U is a function of the total amount of money given to charities A and B. There is no restriction that more money results in more antelope or babies saved, nor that the domain of the function is limited to positive real numbers.
Or are you saying that if I care about whether I help do something important, then my preferences must be non-transitive?
He writes U(A, B), where A is the number of antelope saved and B is the number of babies saved. If you care about anything other than the number of antelope saved or the number of babies saved, then U does not completely describe your preferences. Caring about whether you save the antelope or someone else does counts as caring about something other than the number of antelope saved. Unless you can exhibit a negative baby or a complex antelope, you must accept that this domain is limited to positive numbers.
He later derives from U a function of the amount of money given; strictly speaking this is a completely different function, denoted by U only for convenience. However, the fact that U was initially defined in the previous way means the derived function may have constraints other than transitivity.
To give an example, let f be any function on the real numbers; as such, f has no constraints. We can make f into a function of vectors by setting f(x) = f(|x|), but the result is not a fully general function of vectors: it must satisfy the constraint of being constant on the surface of any sphere centred on the origin.
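A quick sketch of that constraint in code (the particular f is arbitrary): lifting f to vectors through the norm makes it constant on every sphere centred on the origin.

```python
import math
import random

def f(t: float) -> float:
    # An arbitrary function on the reals; no constraints so far.
    return t ** 2 - 3 * t + 1

def f_vec(x: list[float]) -> float:
    # Lift f to vectors via f(x) = f(|x|).
    return f(math.hypot(*x))

# Random points on the sphere of radius 2: f_vec cannot distinguish
# them, because it only ever sees their norm.
for _ in range(3):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.hypot(*v)
    point = [2.0 * c / r for c in v]
    print(f_vec(point))  # always f(2) = -1.0, up to rounding
```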
Fair cop; I was mistaken about the definition of U.
If there is no function U(a, b) which maps to my preferences across the region over which I have control, then the entire position of the original post is void.
Yes, I do not think we actually have a disagreement. The rule that you shouldn’t diversify only applies if your aim is to help people. It obviously doesn’t apply for all possible aims, as it is possible to imagine an agent with a terminal value for diversified charitable donations.
More specifically, it only applies if your goal is to help people, and your donation is not enough to noticeably change the marginal returns on investment.
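A toy illustration of that condition (the per-dollar figures are made up): when marginal returns are effectively constant over the size of your donation, total QALYs are linear in the split, so the optimum is always a corner, i.e. everything to one charity.

```python
# Hypothetical returns per dollar; the donation is small enough that
# these marginal rates stay effectively constant.
QALYS_PER_DOLLAR_A = 0.012
QALYS_PER_DOLLAR_B = 0.009
BUDGET = 1000.0

def qalys(split_to_a: float) -> float:
    # Total QALYs bought when split_to_a dollars go to A, the rest to B.
    return (split_to_a * QALYS_PER_DOLLAR_A
            + (BUDGET - split_to_a) * QALYS_PER_DOLLAR_B)

best = max((qalys(s), s) for s in [0.0, 250.0, 500.0, 750.0, 1000.0])
print(best)  # (12.0, 1000.0): the corner allocation, everything to A
```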
Okay, yes, that's true.
Billionaires, feel free to ignore the OP.
Along with people who donate to small causes.
Although it still applies to them to the extent that they should not donate to a small cause if there is a big cause that offers better returns.
Only if you assume that you can't easily self-modify.
If you’re trying to optimize how you feel instead of something out there in the territory, then you’re wireheading. If you’re going to wirehead, then do it right and feel good without donating.
If you aren’t going to wirehead, then realize that you aren’t actually being effective, and self-modify so that you feel good when you maximize expected QALYs instead.
How I feel IS real. The judgments about the value of my feelings are mostly consistent and transitive, and I choose not to change how my perceptions affect my feelings except for good reasons.