In principle, I guess you could also think about low-tech solutions. For example, people who want to opt out of alcohol might have some slowly dissolving tattoo or dye placed somewhere on their hand. This would eliminate the need for any extra ID checks, but it has the big disadvantage that it would be visible most of the time.
Thanks. Are you able to determine what the typical daily dose is for implanted disulfiram in Eastern Europe? People who take oral disulfiram typically need something like 0.25 g/day to have a significant physiological effect. However, most of the evidence I’ve been able to find (e.g. this paper) suggests that the total amount of disulfiram in implants is around 1 g. If that’s dispensed over a year, you’re getting like 1% of the dosage that’s active orally. On top of that, the evidence seems pretty strong that bioavailability from implants is lower than from oral doses, so it’s effectively even less.
Of course, there’s nothing stopping someone from implanting a dose 100× as large, and maybe bioavailability can be improved (or isn’t that big a concern). But if not, my impression is that most implants are effectively pure placebo.
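For what it’s worth, the “like 1%” figure is just back-of-envelope arithmetic. Here it is spelled out; the oral dose, implant size, and one-year release period are the assumptions from the comment above, not measured values:

```python
# Back-of-envelope check of the implant dosage argument.
# All three inputs are assumptions from the discussion, not data:
oral_dose_g_per_day = 0.25   # typical effective oral disulfiram dose
implant_total_g = 1.0        # total disulfiram in a typical implant
release_period_days = 365    # assume the implant dispenses over one year

implant_dose_g_per_day = implant_total_g / release_period_days
fraction_of_oral = implant_dose_g_per_day / oral_dose_g_per_day
print(f"{implant_dose_g_per_day * 1000:.1f} mg/day, "
      f"{fraction_of_oral:.1%} of the oral dose")
# -> 2.7 mg/day, 1.1% of the oral dose
```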
Very interesting! Do you know how much disulfiram the implant gives out per day? There are a bunch of papers on implants, but there are usually concerns that (a) the dosage might be much smaller than the typical oral dosage and/or (b) absorption is poor.
I specified (right before the first graph) that I was using the US standard of 14 g. (I know the paper uses 10 g. There’s no conflict, because I use their raw data, which is in grams, not drinks.)
Ironically, there is no standard for what a “standard drink” is, with different countries defining it as anything from 8 g to 20 g of ethanol.
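As a concrete illustration of why working in raw grams avoids the conflict, here is the same amount of ethanol converted under a few definitions (the 8 g, 10 g, and 14 g figures are from the discussion above; the 28 g example amount is arbitrary):

```python
# Grams of ethanol per "standard drink" under different definitions,
# taken from the discussion above (definitions range from 8 g to 20 g).
STANDARD_DRINK_G = {"UK": 8, "paper": 10, "US": 14}

grams = 28.0  # arbitrary example amount of ethanol
for label, g in STANDARD_DRINK_G.items():
    print(f"{grams} g of ethanol = {grams / g:.1f} '{label}' drinks")
# 28 g is 3.5 UK drinks, 2.8 "paper" drinks, or 2.0 US drinks.
```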
I wasn’t (intentionally?) being ironic. I guess that for underage drinking we have the advantage that you can sort of guess how old someone looks, but still… good point.
I’ve politely contacted them several times via several different channels, just asking for clarifications on what the “missing coefficients” are in the last model. Total stonewall: they won’t even acknowledge my contacts. Some people more connected to the education community apparently did the same as a result of my post, with the same result.
You could model the two as being totally orthogonal:
Rationality is the art of figuring out how to get what you want.
Utilitarianism is a calculus for figuring out what you should want.
In practice, I think the dividing lines are blurrier. Also, the two tend to come up together, because people who are attracted to one style of thinking tend to be attracted to the other as well.
You definitely need an amount of data at least exponential in the number of parameters, since the number of “bins” is exponential. (It’s not so simple as to say that exponential is enough, because it depends on the distributional overlap. If there are cases where one group never hits a given bin, then even an infinite amount of data doesn’t save you.)
I see what you’re saying, but I was thinking of a case where there is zero probability of overlap among all features. While that technically restores the property that you can multiply the dataset by arbitrarily large numbers, it feels a little like “cheating”, and I agree with your larger point.
I guess Simpson’s paradox does always have a right answer in “stratify along all features”; it’s just that the amount of data you need increases exponentially with the number of relevant features. So I think that in the real world you can multiply the amount of data by a very, very large number and it won’t solve the problem, even though a large enough number would.
In the real world it’s often also sort of an open question whether the number of “features” is even finite.
I like your concept that the only “safe” way to use utilitarianism is if you don’t include new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g., if I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely getting it would be the wrong thing to do?
(PS thank you for being willing to play along with the unrealistic setup!)
This covers a really impressive range of material—well done! I just wanted to point out that if someone followed all of this and wanted more, Shannon’s 1948 paper is surprisingly readable even today and is probably a nice companion:
Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of the best action to take in each situation. It seems that we don’t live in such a universe, and that our ethical intuitions really were created somewhat arbitrarily by evolution. So I agree we don’t need to mathematically justify these things (and maybe it’s impossible), but I wish we could!
If I understand your second point, you’re suggesting that part of the reason our intuition says larger populations are better is that larger populations tend to make average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter-gatherer or agricultural times, probably very low levels. Today, probably a lot higher?)
Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the current utility for the dog in the “yes puppy” example is 5. For what values do you believe it is correct to have or not have the puppy?