Karma: 135
• Let me try and piece together the sequence of events...

• Sherine launches a world-coup against Bayeswatch hegemony

• Vi and Miriam take out Sherine’s headquarters

• Bayeswatch nominally reestablishes control, but their vulnerability is known

• The hivemind starts building a military force

• Vi joins/seizes the hivemind

• The singularity war starts. Vi controls the Bayeswatch-loyal side.

• Sherine-controlled Miriam kills Vi’s original body

• Vi’s forces lose the war for control of Earth’s surface but launch a von Neumann probe

Before drafting this I was wondering if Vi started the singularity war. But if she controlled the singularity and the Bayeswatch-loyal human forces, she probably would have won.

Unclear if Sherine a) is behind the singularity, b) sides with the singularity over Vi, c) thinks a third faction can defeat both Vi and the singularity, or d) figures humanity is doomed no matter what so she might as well get some revenge. Her em’s dying words suggest a).

• Just did this, and wanted to make it known at least one person is enjoying the D&D.Sci backlog.

And agreed the Doom thing is totally fair—I saw those anomalously high values and decided it wasn’t worth playing around with.

• It sounds like you adhere to a version of NFLS that only counts consequences as observable if you yourself can observe them in practice? So you can’t care about the far future if you don’t think you’ll live to see it? That seems pretty extreme if I’m interpreting it correctly.

• It’s very easy to read this as a call to mostly bring back the old philosophy of progress, despite what I recognize as attempts to avoid that reading.

My take is that a genuinely new philosophy of progress needs to transcend the old battle by positioning itself as heir to both sides. Increased understanding of the environmental and other costs of industrialization is no less a form of progress than new industrial technology. Environmentalists seeing industry as the enemy and industrialists seeing environmentalism as the enemy are both missing a larger picture.

In this vision, there would be Roots of Progress posts on topics like CFCs/ozone layer and acid rain, or maybe broader things like “how we stopped dumping so much stuff in rivers”, without any sense that these posts are opposed to or in a different category from the rest. You could still discuss the disagreements around how to solve these issues, but even those judged completely wrong should not be cast as villains any more than proponents of “beating”-type threshing machines.

(I realize I’m sort of describing Mistake Theory. Mistake Theory being the philosophy of progress should be no surprise!)

• Thanks for doing this! Just seeing the concept makes me realize how subjective my assessments of sunburn risk are.

One thing I’ve been wondering lately is the effects of interrupted vs uninterrupted sun exposure. E.g., if I spend an hour outside, an hour inside, and then another hour outside, how does that compare to the effects of two continuous hours outside? I’ve tried a bit of googling, but the information is surprisingly hard to find.

What I have learned is that UV-induced DNA damage is mostly single-strand breaks that can be repaired via nucleotide excision repair, but I’m not sure how long that takes. I did find this on the simpler related process of base excision repair:

BER reactions in cells are extremely fast, and in many cases, an individual BER event may take only a few minutes (10,11). The repair of acute DNA damage requires several rounds of BER and can take several hours, as the amount of BER enzymes is limited.

To me that suggests a model where sun damage accumulates at a rate depending on exposure, is repaired at a fixed rate, and damage reaching a certain threshold triggers a sunburn.
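That threshold model is easy to play with numerically. Here is a minimal sketch comparing the interrupted and continuous schedules from above; the rates and the per-minute time step are entirely made up for illustration:

```python
# Toy leaky-bucket model of sunburn: damage accumulates while in the sun,
# is repaired at a fixed rate, and a burn would trigger at some threshold.
# damage_rate and repair_rate are arbitrary illustrative numbers.

def peak_damage(schedule, damage_rate=2.0, repair_rate=1.0):
    """schedule: list of (minutes, in_sun) segments. Returns max damage reached."""
    damage, peak = 0.0, 0.0
    for minutes, in_sun in schedule:
        for _ in range(minutes):
            damage += damage_rate if in_sun else 0.0
            damage = max(0.0, damage - repair_rate)  # fixed-rate repair
            peak = max(peak, damage)
    return peak

continuous = peak_damage([(120, True)])                        # two hours straight
interrupted = peak_damage([(60, True), (60, False), (60, True)])  # hour on, hour off, hour on
```

Under these toy parameters the hour indoors lets repair erase everything accumulated in the first hour, so the interrupted schedule peaks at half the continuous one; whether real repair is fast enough on these timescales is exactly the open question.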

• Jeff Nobbs (one of OP’s sources) says polyunsaturated fatty acids are the real culprit and provides a helpful chart. Tl;dr coconut oil is great, olive and avocado oil are pretty good, avoid canola/peanut/rice bran/corn/sunflower. (Sesame isn’t on the chart but IME it’s used in pretty small quantities anyway).

It’s hard to get much oil from whole versions of the source foods. My quick calculation says that, e.g., 5 tbsp of soybean oil requires six blocks’ worth of tofu.
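For what it’s worth, here is one way that back-of-envelope calculation could go. Every figure below (grams of oil per tablespoon, oil yield of soybeans, dry beans per block of tofu) is an assumption for illustration, not a sourced number:

```python
# Rough check: how many blocks of tofu correspond to 5 tbsp of soybean oil?
# All constants are assumptions, not measured values.
TBSP_OIL_G = 13.6        # grams of oil per tablespoon (assumed)
OIL_YIELD = 0.19         # oil as a fraction of dry soybean weight (assumed)
BEANS_PER_BLOCK_G = 60   # dry soybeans that go into one block of tofu (assumed)

beans_needed_g = 5 * TBSP_OIL_G / OIL_YIELD   # dry beans needed for 5 tbsp of oil
blocks = beans_needed_g / BEANS_PER_BLOCK_G   # comes out to roughly six blocks
```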

• There’s a self-fulfilling prophecy aspect to this. If you expect to be judged for your transitive associations, you’ll choose them carefully. If you choose your transitive associations carefully, they’ll provide more Bayesian evidence about your values, making it more rational for others to judge you by them.

• Thanks for pulling all that data!

That study says third-generation Chinese-Americans—presumably the ones eating the most typically American diet—are actually slightly more obese than white Americans! At face value that pretty much torpedoes any genetic adaptation theory (and I have no particular reason not to take it at face value).

Theories 1 and 2 are both quite possible.

Re: Japan, it looks like soybean oil doesn’t dominate vegetable oil intake like in the US; rapeseed is more common and did not decline in the same way, and palm oil is also significant, so their overall trend in vegetable oil consumption isn’t so easy to eyeball. Though I think those numbers are consumption in the economic sense, not in the ‘eating’ sense—not sure how to account for that.

• Also, re: China being an outlier of high vegetable oil intake with low obesity, apparently soybean oil has been used there for millennia. Adaptation?

• It occurs to me that from a system robustness perspective, luxury is actually great, because it implies surplus capacity (assuming society can and will divert luxury-production to essentials-production in a crisis).

• The “best” values in KS are those that result when you optimize one player’s payoff under the constraint that the second player’s payoff is higher than the disagreement payoff.

I’m not sure this is the case? Wiki does say “It is assumed that the problem is nontrivial, i.e, the agreements in [the feasible set] are better for both parties than the disagreement”, but this is ambiguous as to whether they mean some or all. Googling further, I see graphs like this where non-Pareto-improvement solutions visibly do count.

I agree that your version seems more reasonable, but I think you lose monotonicity over the set of all policies, because a weak improvement to player 1's payoffs could turn a (-1, 1000) point into a (0.1, 1000) point, make it able to affect the solution, and make the solution for player 1 worse. Though you’ll still have monotonicity over the restricted set of policies.

• First of all, this is awesome.

I didn’t know about KS bargaining before reading this, thinking through it now…

It seems kind of odd that terrible solutions like (1000, −10^100) could determine the outcome (I realize they can’t be the outcome, but still). I would hesitate to use KS bargaining unless I felt that the values were in some sense ‘reasonable’ outcomes. Do you have a general sense of what a life of maximizing your spouse’s utility would look like (and vice versa)?

Trying to imagine this myself wrt my own partner, figuring out my utility function is a little tricky. The issue is that I think I have some concern for fairness baked in. Like, do I want my partner to do 100% of chores? My reaction is to say ‘no, that would be unfair, I don’t want to be unfair’. But if you’re referencing your utility function in a bargaining procedure to decide what ‘fair’ is, I don’t think that works. So, would I want my partner to do 100% of chores if that were fair? I can simulate that by imagining she offered to do this temporarily as part of a trade or bet and asking myself if I’d consider that a better deal than, say, her doing 75% of chores. And yes, yes I would. But I’d consider ‘she does 100% of chores no matter what, I’m not allowed to help’ a worse deal than ‘she does 100% of chores unless it becomes too costly to her’ for some definitions of ‘too costly’.

Assuming that my utility function is like that about most things, and that hers is as well, I’d say our values are actually reasonable counterfactuals to consider. Which inclines me to think yours are as well.

Still, ‘everything I do’ is a big solution space to make assumptions about. The Vow of Concord pretty much requires you to look for edge cases where your spouse’s utility can be increased by disproportionate sacrifices of yours; I’d suggest you start looking now (if you haven’t yet), before you’ve Vowed to let them guide your decisions.

• It makes a difference whether punishment is zero-sum or negative-sum. If we can’t take $100 from Bob to give to someone else but can only impose $100 of cost on him to no one’s benefit, we’d rather not do that.

In that case I think the answer is to forego the punishment if you’re sufficiently confident the harm is an inevitable result of a net-good decision.

• Since I first heard of controversy around ballot selfies, I’ve thought that an alternative to prosecuting those who take them would be to facilitate fake ballot selfies.

I was going to say you could implement this by letting people surrender a filled-out-but-not-submitted ballot to a poll worker in exchange for a new one, but you can probably already do this if you just say you made a mistake? In that case polling sites would just need to put posters up telling people to do this if they are under pressure of any kind to produce a ballot selfie.

• Do you have thoughts on pros and cons of this relative to progressive consumption tax? (I agree they’re mostly equivalent and both good).

I think consumption tax has an advantage in terms of perceived fairness in that it (almost) guarantees you won’t get years where e.g. Jeff Bezos pays literally zero taxes, which look pretty bad. Whereas these reforms could give you years where his taxes are highly negative, which would look worse.

• Hmm… I find the scaling aspect a bit fishy (maybe an ordinal vs cardinal utility issue?). The goodness of a proxy should be measured by the actions it guides, and a V-maximizer, a log(V)-maximizer, and a maximizer of any increasing function of V will all take the same actions (barring uncertain outcomes).

That said, reverse Goodhart remains possible. I’d characterize it as a matter of being below a proxy’s range of validity, whereas the more familiar Goodhart problem involves ending up above it. E.g. if V = X + Y, then U = X is a reverse-Goodhart proxy for V—the higher X gets, the less you’ll lose (relatively) by neglecting Y. (Though we’d have to specify some assumptions about the available actions to make that a theorem).

An intuitive example might be a game with an expert strategy and a beginner strategy—‘skill at the expert strategy’ being a reverse-Goodhart proxy for skill at the game.
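Both points can be sketched with a made-up finite action set (all numbers here are arbitrary, chosen only so the proxy-maximizer and the true maximizer disagree):

```python
import math

# Each action yields an (x, y) outcome; true value is V = X + Y, proxy is U = X.
actions = [(0, 9), (4, 6), (10, 3), (12, 0)]
V = lambda a: a[0] + a[1]
U = lambda a: a[0]

# Monotone rescalings of V pick the same action:
best_V = max(actions, key=V)
best_log = max(actions, key=lambda a: math.log(V(a)))

# Relative loss from maximizing the proxy U shrinks as X grows:
def rel_loss(shift):
    """Shift the whole action set up the X axis, then compare proxy vs true argmax."""
    shifted = [(x + shift, y) for x, y in actions]
    best = max(map(V, shifted))
    return (best - V(max(shifted, key=U))) / best
```

Here shifting the action set by 100 just moves everything up the X axis, and the relative cost of following the proxy U drops from about 8% to under 1%.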

• A more general observation that I’m sure has been stated many times but clicked for me while reading this: Once you condition on the output of a prediction process, correlations are residuals. Positive/negative/zero coefficients then map not to good/bad/irrelevant but to underrated/overrated/valued accurately.

(“Which college a student attends” is the output of a prediction process insofar as students attend the most selective college that accepts them and colleges differ only in their admission cutoffs on a common scoring function, I think).

• Shorter statement of my answer:

The source of the apparent paradox here is that the perceived absurdity of ‘getting lucky N times in a row’ doesn’t scale linearly with N, which makes it unintuitive that an aggregation of ordinary evidence can justify an extraordinary belief.

You can get the same problem with less anthropic confusion by using coin-flip predictions instead of Russian Roulette. It seems weird that predicting enough flips successfully would force you to conclude that you can psychically predict flips, but that’s just a real and correct implication of having a nonzero prior on psychic abilities in the first place.
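The coin-flip version is a two-line Bayes update. Assuming a (made-up) 1e-20 prior on psychic ability, and a “psychic” who calls every flip correctly:

```python
# How many correctly-called fair-coin flips before "psychic" beats a 1e-20 prior?
# Each correct call is a likelihood ratio of 1 / (1/2) = 2 in favor of "psychic".
prior = 1e-20
odds = prior / (1 - prior)
n = 0
while odds / (1 + odds) <= 0.5:  # posterior probability of "psychic"
    odds *= 2
    n += 1
```

Each hit is only a factor-of-2 update, but 67 of them multiply to about 1.5 × 10^20, enough to overwhelm the prior.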