Is your argument that 10% is the expected loss, or that it’s plausible that you’d lose 10%?
I think >10% expected loss can probably be argued for, but giving a strong argument would involve going into the details of my moral/philosophical/empirical uncertainties and my resource allocations, then considering the various ways my uncertainties could be resolved (various possible combinations of philosophical and empirical outcomes), estimating my expected loss in each scenario, and averaging the losses. This is a lot of work, and I’m a bit reluctant to do it for privacy/signaling reasons; plus, I don’t know whether Paul would consider my understanding in this area to be state of the art (he didn’t answer my question about what he thinks the state of the art is). So for now I’m pointing out that in at least some plausible scenarios the loss is at least 10%, and mostly just trying to understand why Paul thinks 10% expected loss is way too high, rather than making a strong argument of my own.
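[For concreteness, here is the shape of the calculation being described, as a tiny sketch; every scenario, probability, and loss figure below is invented purely for illustration, not a claim about anyone’s actual uncertainties.]

```python
# A minimal sketch of the averaging described above, with entirely
# made-up numbers: enumerate ways the uncertainties could resolve,
# estimate the loss under each resolution, and take the average.
scenarios = [
    # (probability, fraction of value lost in that scenario)
    (0.5, 0.00),  # current allocation turns out to be roughly right
    (0.3, 0.15),  # moderately wrong
    (0.2, 0.30),  # badly wrong
]

assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1

expected_loss = sum(p * loss for p, loss in scenarios)
print(f"expected loss: {expected_loss:.1%}")  # 10.5% with these illustrative numbers
```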
I understood Paul to be arguing against 10% being the expected loss, in which case the possibility of making >10% changes in allocation doesn’t seem like a strong counterargument.
Does it help if I restate that as: I think that, with high probability, if I learned what my “true values” actually are, I’d make at least a 10% change in my resource allocations?
Yes, that’s clear.
I had an argument in mind that I thought Paul might be assuming, but on reflection I’m not sure it makes any sense (and so I update away from it being what Paul had in mind). But I’ll share it anyway in a child comment.
Potentially confused argument:
Suppose 1) you’re just choosing between spending and saving, 2) by default you’re going to allocate 50-50 to each, and 3) you know that there are X considerations such that, after you take each one into account, you’ll adjust the ratio by a factor of 2 in one direction or the other (each direction equally likely).
If X is 1, then you expect to adjust the ratio by a factor of 2. If X is 10, the log of the ratio performs a random walk of 10 doubling-sized steps, so you expect a typical overall adjustment of about 2^sqrt(10) ≈ 9 (sqrt(10) doublings, the standard deviation of the walk).
So, the more considerations there are that might affect the ratio, the more likely it is that you’ll end up with an allocation close to 0% or 100%, where one more doubling or halving of the ratio barely moves the allocation. And so, depending on how the realized value is related to the allocation ratio, skipping one of the considerations might not change the EV that much.
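[A quick Monte Carlo sketch of this argument, with all parameters invented for illustration: it models the log of the spend:save ratio as a ±1 random walk over the X considerations, then reports the typical overall adjustment factor, how often the final allocation ends up extreme, and how much the allocation moves if one consideration is skipped, as a crude proxy for the EV impact.]

```python
import random

random.seed(0)  # reproducible illustration

def allocation(log2_ratio: float) -> float:
    """Fraction allocated to spending, given log2 of the spend:save ratio."""
    r = 2.0 ** log2_ratio
    return r / (1.0 + r)

def simulate(num_considerations: int, trials: int = 100_000) -> None:
    extreme = 0        # runs ending outside the 10%-90% allocation band
    total_shift = 0.0  # |allocation change| caused by skipping one consideration
    total_sq = 0.0     # accumulates walk^2 for the RMS adjustment factor
    for _ in range(trials):
        steps = [random.choice((-1, 1)) for _ in range(num_considerations)]
        full = sum(steps)           # log2-ratio after weighing every consideration
        skipped = full - steps[-1]  # log2-ratio if the last consideration is skipped
        a_full, a_skip = allocation(full), allocation(skipped)
        extreme += a_full < 0.1 or a_full > 0.9
        total_shift += abs(a_full - a_skip)
        total_sq += full * full
    rms_factor = 2.0 ** ((total_sq / trials) ** 0.5)
    print(f"X={num_considerations:2d}: typical adjustment ~ x{rms_factor:.1f}, "
          f"P(allocation outside 10-90%) = {extreme / trials:.0%}, "
          f"mean shift from skipping one consideration = {total_shift / trials:.1%}")

for x in (1, 3, 10):
    simulate(x)
```

[Under these assumptions the typical adjustment for X=1 is exactly a factor of 2, extreme allocations become common as X grows, and the average allocation shift from skipping one consideration shrinks, which is the direction the argument needs.]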