# Logarithms and Total Utilitarianism

Epistemic status: I might be reinventing the wheel here.

A common cause for rejecting total utilitarianism is that it implies the so-called Repugnant Conclusion, about which much has been written elsewhere. I will argue that while this implication is solid in theory, it does not apply in our current known universe. My view is similar to the one expressed here, but I try to give more detail.

## The Repugnant Conclusion IRL

The greatest relevance of the RC in practice arises in situations of scarce resources and Malthusian population traps¹: we compare population A, where there are few people, each having plentiful resources, and population Z, which has grown from A until the average person lives in near-subsistence conditions.

Let’s formalize this a bit: suppose each person requires 1 unit of resources to live, so that the utility of a person living on 1 resource is exactly 0: a completely neutral life. Furthermore, suppose utility is linear w.r.t. resources: doubling resources means doubling utility, and 10 resources correspond to 1 utility. If there are 100 resources in the world, population A might contain 10 people with 10 resources each and total utility 10; population Z might contain 99 people with 100/99 resources each and total utility also 10.

So in this model we are indifferent between A and Z even though everyone in Z is barely subsisting, and this would be the Repugnant Conclusion². But this conclusion depends crucially on the relationship between resources and utility, which we have assumed to be linear. What if our assumption is wrong? What is this relationship in the actual world? Note that this is an empirical question³.

It is well known that self-reported happiness varies logarithmically with income⁴, both between countries and for individuals within each country, so it seems reasonable to assume that the utility-resources relation is logarithmic: exponential increases in resources bring linear increases in utility.

Back to our model, assuming log utility, how do we now compare A and Z? If utility per person is u = log10(r), where r are the resources available to that person, then total utility is U = log10(r_1) + ... + log10(r_N). Assuming equality in the population (see the Equality section), if R are the total resources and N is the population size, each person has R/N resources and so we have

U = N log10(R/N)

We can plot total utility U (vertical axis) as a function of N (horizontal axis) for R = 100:

[plot of U = N log10(100/N) for N between 0 and 100]

Here we can see two extremes of zero utility: at N = 0, where there are no persons, and at N = 100, where each person lives with 1 resource, at subsistence level. In the middle there is a sweet spot, and the maximum M lies at around 37 people⁵.

Now we can answer our question! Population A, where N = 10 and U = 10, is better than population Z, where N = 99 and U ≈ 0.4; but M, where N = 37 and U ≈ 16, is a superior alternative to both.

So I have shown that there is a population M, greater and better than A, where everyone is worse off; how is that different from the RC? Well, the difference is that this does not happen for every population, but only for those where average well-being is relatively high. Furthermore, the average individual in M is far above subsistence, with 100/37 ≈ 2.7 resources each.
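The numbers in this section are easy to check with a short script (a minimal sketch; the function name is mine, not from the post):

```python
import math

R = 100  # total resources in the model

def total_utility(n, r=R):
    """Total utility U = N * log10(R/N) when r resources are split equally among n people."""
    return n * math.log10(r / n)

best_n = max(range(1, R + 1), key=total_utility)
print(best_n)                       # 37, the population M
print(round(total_utility(10), 2))  # population A: 10.0
print(round(total_utility(99), 2))  # population Z: 0.43
print(round(total_utility(37), 2))  # population M: 15.98
```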

## Equality

In my model I assumed an equal distribution of resources over the population, mainly to simplify the calculations, but also because under the log relationship, and if the population is held constant, total utilitarianism endorses equality. I will try to give an intuition for this and then a formal proof.

This graph represents individual utility (vertical axis) vs individual resources (horizontal axis). If there are two people, A and B, having 2.5 and 7.5 resources respectively, we can reallocate resources so that both are now at point M, with 5 each. Note that the increase in utility for A (log10(2) ≈ 0.30) is greater than the decrease in utility for B (log10(1.5) ≈ 0.18), so total utility increases.

This happens no matter where in the graph A and B are, due to the concavity of the log function. As long as there is a difference in wealth, you can increase total utility by redistributing resources equally.

For a formal proof, see footnote 6.
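The two-person example can be checked directly; the equal split beats the unequal one by log10(4/3) ≈ 0.12:

```python
import math

u = lambda r: math.log10(r)      # per-person utility from the model

before = u(2.5) + u(7.5)         # A and B before redistribution
after = 2 * u(5.0)               # both moved to the midpoint M
print(round(after - before, 3))  # 0.125: equalizing raises total utility
```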

## Implications

The main conclusion I draw from this is that although total utilitarianism is far from perfect, it might give good results in practice. The Repugnant Conclusion is not dead, however. We can certainly imagine some sentient aliens, AIs or animals whose utility function is such that greater, worse-average-utility populations end up being better. But in this case, should we really call it repugnant? Could our intuition be fine-tuned for thinking about humans, and thus not applicable to those hypothetical beings?

I don’t know to what extent others have explored the connection between total utilitarianism and equality, but I was surprised when I realized that the former could imply the latter. Of course, even if total utility is all that matters, it might not be possible to reshuffle resources among individuals with complete freedom, as is possible in my model.

## Footnotes

1: One might consider other ways of controlling individual utility in a population besides resources (e.g. mind design, torture...) but these seem less relevant to me.

2: Actually, in the original formulation Z is shown to be better than A, not just equally good.

3: As long as utility is well defined, that is. Here I will use self-reported happiness as a proxy for utility.

4: See the charts here

5: We can find the exact maximum for any R with a bit of calculus:

dU/dN = log10(R/N) − log10(e) = 0 ⟹ R/N = e ⟹ N = R/e

A nice property of this is that the ratio R/N that maximizes U is constant (equal to e) for all R; the utility values themselves depend on the arbitrary choice of base 10 for the logarithms, but the optimal ratio does not.
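The R/N = e claim can be verified numerically with a grid search (a sketch; the approach and names are mine):

```python
import math

def total_utility(n, r):
    return n * math.log10(r / n)

for R in (100, 1000):
    # dense grid of candidate population sizes, step 0.01
    ns = [i / 100 for i in range(1, 100 * R)]
    best = max(ns, key=lambda n: total_utility(n, R))
    print(R, round(R / best, 2))  # ratio at the optimum: 2.72 ≈ e, both times
```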

6: For a population of N individuals with total resources R, the distribution of resources which maximizes total utility is that where r_i = R/N for all i. The proof goes by induction on N.

This is obvious in the case N = 1. For the induction step, we can separate a population of N + 1 into two sets of N and 1 individuals respectively, so that total utility is the sum of the utilities of the two groups. Suppose we allocate x resources to the group of N, and R − x to the last person. By hypothesis, each of the N people must receive x/N resources to maximize their total utility, so

U = N log10(x/N) + log10(R − x)

Now we have to decide how much x should be.

dU/dx = (1/ln 10)(N/x − 1/(R − x)) = 0

Solving for x:

N(R − x) = x ⟹ x = NR/(N + 1)

Therefore, r_i = x/N = R/(N + 1) for each of the first N individuals and R − x = R/(N + 1) for the last one.
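A quick sanity check of the equal-split claim against random allocations (a sketch, names mine; not a substitute for the proof):

```python
import math
import random

def total_utility(alloc):
    return sum(math.log10(r) for r in alloc)

R, N = 100.0, 5
equal = [R / N] * N

random.seed(0)
for _ in range(1000):
    # random positive allocation of R among N people via N-1 cut points
    cuts = sorted(random.uniform(0, R) for _ in range(N - 1))
    alloc = [b - a for a, b in zip([0.0] + cuts, cuts + [R])]
    assert total_utility(alloc) <= total_utility(equal)
print("equal split always wins")
```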

• Looking at the math of dividing a fixed pool of resources among a non-fixed number of people, a feature of log(r) that matters a lot is that log(r) < 0 for r < 1. The first unit of resources that you give to a person is essentially wasted, because it just gets them up to 0 utility (which is no better than just having 1 fewer person around).

That favors having fewer people, so that you don’t have to keep wasting that first unit of resources on each person. If the utility function for a person in terms of their resources was f(r) = r − 1, you would similarly find that it is best not to have too many people (in that case, having exactly 1 person would work best).

Whereas if it was f(r) = sqrt(r), then it would be best to have as many people as possible, because you’re starting from 0 utility at 0 resources and sqrt is steepest right near 0. Doing the calculation: if you have R units of resources divided equally among N people, the total utility is sqrt(RN). log(1+r) is similar to sqrt in that total utility increases as N increases, but it is bounded if R is fixed and just approaches that bound (if we use natural log, that bound is just R).

To sum up: diminishing marginal utility favors having more people, each with fewer resources (in addition to favoring equal distribution of resources); f(0) < 0 favors having fewer people, each with more resources (to avoid “wasting” the bit of resources that gets a person up to 0 utility); and functions with both features, like log(r), favor some intermediate solution with a moderate population size.
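The commenter’s sqrt(RN) identity and the bound for log(1+r) can be checked (a quick sketch, names mine):

```python
import math

R = 100
for N in (1, 10, 100, 10_000):
    total_sqrt = N * math.sqrt(R / N)    # equals sqrt(R*N), grows without bound
    total_log = N * math.log(1 + R / N)  # natural log, approaches the bound R
    print(N, round(total_sqrt, 1), round(total_log, 1))
```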

• Note that the key feature of the log function used here is not its slow growth, but the fact that it takes negative values on small inputs. For example, if we take the function u(r) = log(r+1), so that u(0) = 0, then the RC holds.

Although there are also solutions that prevent the RC without taking negative values, e.g. u(r) = exp(−1/r).
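Both claims can be checked numerically (a sketch; I use base-10 log for the shifted case to match the post):

```python
import math

R = 100.0
u1 = lambda r: math.log10(r + 1)  # shifted log: u(0) = 0, never negative
u2 = lambda r: math.exp(-1 / r)   # positive everywhere, vanishes fast near 0

total = lambda u, n: n * u(R / n)

print([round(total(u1, n), 1) for n in (10, 100, 1000, 10_000)])
# keeps growing with n: the RC holds for u1
print([round(total(u2, n), 1) for n in (10, 100, 1000, 10_000)])
# peaks around n = R and then collapses: no RC for u2
```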

• In your linear model, the hypotheses “the utility of a person living on 1 resource is 0” and “doubling resources doubles utility” imply that utility is always 0. Maybe you meant the second hypothesis to be “doubling the number of resources beyond the first doubles utility”, so that a person’s utility is 0.1 times the number of resources beyond the first. In this version of the linear model, the total utility of population A is 9 (0.9 utility per person), and the total utility of population Z is 0.1 (approximately 0.001 utility per person).

• This has shifted my views very positively in favor of total log utilitarianism, as it dissolves the Repugnant Conclusion quite cleanly. Great post!

• Nice example! I still think the most reasonable objection to something like total utilitarianism is that population ethics is a matter of preferences, and my preferences are complicated. If humans prefer to set aside part of the universe as a nature preserve rather than devoting those resources to more humans, then so be it: human preferences are the only preferences we’ve got.

• The repugnant conclusion just says “a sufficiently large number of lives barely worth living is preferable to a smaller number of good lives”. It says nothing about resources; e.g., it doesn’t say that the sufficiently large number can be attained by redistributing a fixed supply.

• Presumably “if other things are equal” implies equal resources.

EDIT: The original statement by Parfit does not reference any resource constraint explicitly; at least his original example of A → A+ → B certainly does not seem to mention it. Neither does the conclusion that “any loss in the quality of lives in a population can be compensated for by a sufficient gain in the quantity of a population.” Disclaimer: I have not read the primary sources.

• I think in the philosophy literature it’s generally interpreted as independent of resource constraints. A quick scan of the linked SEP article seems to confirm this. Apart from the question of what Parfit said, it makes a lot of sense to consider the questions of “what is good” and “what is feasible” separately. And people find the claim that sufficiently many barely-good lives are better than fewer happy lives plenty repugnant even if it has no direct implications for population policy. (In my opinion this is largely because a life barely worth living is better than they imagine.)

• Firstly, excellent post! A cool idea, well-written, and very thought-provoking. Some thoughts on the robustness of the result:

Suppose that every individual were able to produce at least 1 unit of resources throughout their life. Then total utility is monotonically increasing in the number of people, and you have the repugnant conclusion again. How likely is this supposition? Assuming we have arbitrarily advanced technology, including AI, humans will be pretty irrelevant to the production of resources like food or compute (if we’re in a simulation). But plausibly humans could still produce “goods” which are valuable to other humans, like friendship. Let’s plug this into your model above and see what happens. I’ll assume that humans need at least 1/K physical resources to survive, but otherwise their utility is logarithmic in the amount of physical resources + friendship that they get. Also, assume that every person receives as much friendship, F, as they produce. So

U = N log(R/N + F)

with an upper bound of N = KR. When F >= 1, the optimal value of N is in fact KR, and so utility per person is

log(1/K + F)

which can be arbitrarily close to 0 (depending on K). When 0 <= F < 1, then I think you get something like your original result again, but I’m not sure. Empirically, I expect that the best friends can produce F >> 1, i.e. if you had nothing except just enough food/water to keep yourself alive, but also you were the sole focus of their friendship, then you’d consider your life well worth living. Idk about average production, but hopefully that’ll improve in the future too. In summary, friendship may make things repugnant :P
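A sketch of this friendship variant (my formulation of the commenter’s model; the K and F values are arbitrary assumptions):

```python
import math

R, K, F = 100.0, 10.0, 2.0  # resources, survival floor 1/K, friendship per person

def total_utility(n):
    return n * math.log10(R / n + F)

n_max = int(K * R)          # each person needs at least 1/K physical resources
best = max(range(1, n_max + 1), key=total_utility)
print(best == n_max)        # True: with F >= 1 the optimum sits at the N = KR boundary
```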

Here’s another version of the repugnant conclusion and your argument. Suppose that the amount of resources used by each person is roughly fixed per unit time (because, say, we’re living in a simulation), but that there’s a period of infancy and early childhood which uses up resources and isn’t morally valuable. Then the resources used up by one person are I + L, where I is the length of their infancy and L is the length of the rest of their life, but the utility gained from their life is a function of L alone. What function of L? Perhaps you think that it’s linear in L; for example, if you’re a hedonic utilitarian, it’s plausible that people will be just as happy later in their life as earlier. (In fact, right now, old people tend to be happiest.) If so, you must endorse the anti-repugnant conclusion, where you’d prefer a population with very few very long-lived people, to minimise the fixed cost of infancy. If you’re a preference utilitarian, maybe you think that there’s diminishing marginal utility to having your preferences satisfied. It then follows that there’s an optimal point at which to kill people, which isn’t too soon (otherwise you’re incurring high fixed costs) and isn’t too late (otherwise people’s marginal utility diminishes too much): a conclusion which is analogous to your result.

• Check my post on Nonlinear perception of happiness: the logarithm is assumed to be in a different place, but the part about implications for ethics contains a version of this argument.

• I don’t know to what extent others have explored the connection between total utilitarianism and equality

Diminishing marginal utility is one of the standard arguments for redistribution.

• It is, but this is a special case: it has to diminish very, very quickly, otherwise the repugnant conclusion holds.

• Total utilitarianism does imply the repugnant conclusion, very straightforwardly.

For example, imagine that world A has 1,000,000,000,000,000,000 people each with 10,000,000 utility and world Z has 10,000,000,000,000,000,000,000,000,000,000,000,000,000 people each with 0.0000000001 utility. Which is better?

Total utilitarianism says that you just multiply. World A has 10^18 people x 10^7 utility per person = 10^25 total utility. World Z has 10^40 people x 10^-10 utility per person = 10^30 total utility. World Z is way better.

This seems repugnant; intuitively, world Z is much worse than world A.

Parfit went through cleverer steps because he wanted his argument to apply more generally, not just to total utilitarianism. Even much weaker assumptions can get to this repugnant-seeming conclusion that a world like Z is better than a world like A.

The point is that lots of people are confused about axiology. When they try to give opinions about population ethics, judging in various scenarios whether one hypothetical world is better than another, they’ll wind up making judgments that are inconsistent with each other.

• The paragraph that I was quoting from was just about diminishing marginal utility and equality/redistribution, not about the repugnant conclusion in particular.

• This feels like one of those cases where there ought to be a mistake somewhere, given how many eyes have been on the problem and how simple this example is. Yet I cannot find any errors. All it takes for the repugnant conclusion to be avoided is the logarithmic (or slower, like log(log(R/N))) dependence of utility on the available resources. I’m impressed. Maybe someone can update the relevant wiki entry.

• The mere addition paradox is an argument that, if you accept some reasonable-seeming axioms about population ethics, then for any positive happiness level h, if we start from a population where everyone has happiness level h, then for any positive happiness level h' < h, there is a larger population where everyone has happiness h' that is preferable to the original population. Most people find this counterintuitive. The interesting thing is that either the counterintuitive result is true, or one of the assumptions is false.

This argument continues to apply regardless of how happiness scales with resources. The resource argument implies that the problem is not faced as stated when resources are fixed and happiness is logarithmic in resources, but (a) artificial thought experiments are useful if we are trying to formalize ethics, and (b) the problem is still faced if resources increase at the right rate as population increases. There is no need to update the Wikipedia page.

• As I understand it, the idea behind this post dissolves the paradox because it allows us to reframe it in terms of possibility: for a fixed level of resources, there is a number of people for which equal distribution of resources produces the optimal sum of utility.

Sure, you could get a greater sum from an enormous repugnant population at subsistence level, but creating that population would take more resources than you have.

And what is more: even in that situation there is always another, non-aberrant distribution of resources that uses in total the same quantity of resources as the repugnant distribution and produces a greater sum of utility.

• It doesn’t dissolve the paradox if it doesn’t show that you can construct a preference function over populations that doesn’t have any counterintuitive properties (while the repugnant conclusion argument implies it must have at least one counterintuitive property). At best, it shows that the relevant choices are unlikely to be faced in reality, such that even a “bad” preference function performs decently in the real world. But that doesn’t resolve the philosophical problem, much less dissolve it.

I don’t think it even shows that the relevant choices are unlikely to be faced in reality, since situations where you can get more resources by having a higher population are really common. (Consider: a higher population contains more workers.)

• It dissolves the RC for me, because it answers the question “What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about ‘the Repugnant Conclusion’?” [grabbed from your link, with “the Repugnant Conclusion” substituted for “free will”].

I feel after reading that post that I no longer find the RC counterintuitive; instead it feels self-evident, and I can channel the repugnancy to aberrant distributions of resources.

But granted, most people I have talked to do not feel the question is dissolved through this. I would be curious to see how many people stop being intuitively confused about the RC after reading a similar line of reasoning.

The point about more workers ⇒ more resources is also an interesting thought. We could probably expand the model to vary resources with workers, and I would expect a similar conclusion to hold for a reasonable model: the optimal sum of utility is not achieved at the extremes, but in a happy medium. Either that, or each additional worker produces so much that even utility per capita grows as the number of workers goes to infinity.

• I don’t see how the post says anything about the cognitive algorithm generating the repugnant conclusion. It’s just saying the choices are unlikely to be faced in reality. I think people thinking through the repugnant conclusion are not necessarily thinking about resources; they might just be thinking about happiness levels (that’s how it’s usually stated, anyway).

Here’s a simple model. Total amount of resources = population + sqrt(population). Now we get a repugnant conclusion: it’s better to have as high a population as possible, and everyone is living off of 1 + epsilon resources.
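Plugging the commenter’s growth rule into the post’s log10 utility confirms this (a quick sketch):

```python
import math

def total_utility(n):
    resources = n + math.sqrt(n)          # resources grow with the population
    return n * math.log10(resources / n)  # everyone lives on 1 + 1/sqrt(n)

print([round(total_utility(n), 1) for n in (10, 100, 10_000, 1_000_000)])
# total utility grows without bound, so ever-larger populations keep winning
```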

• The movement I was going through when thinking about the RC is something akin to “huh, happiness/utility is not a concept that I have an intuitive feeling for, so let me substitute happiness/utility for resources. Now clearly distributing the resources so thinly is very suboptimal. So let’s substitute back resources for utility/happiness and reach the conclusion that distributing the utility/happiness so thinly is very suboptimal, so I find this scenario repugnant.”

Yeah, the simple model you propose beats my initial intuition. It feels very off, though. Maybe it’s missing diminishing returns, and I am rigged to expect diminishing returns?

• This is a novel argument about the applicability of the repugnant conclusion for a certain form of the dependence of happiness on wealth. Faster-than-logarithmic growth does not let one avoid the conclusion even if the resources are constrained. It looks like a publishable result, let alone deserving a mention in the Wikipedia entry.

• Logarithmic growth does not let you avoid it either, if resources increase as population increases at a certain rate.

The logarithm function isn’t even special here; it could just as well be that happiness = (resources − 1)^(1/3).

• The point of the post was to investigate the reallocation of existing resources to maximize total utility by creating more, less-happy people, and whether this can evade the mere addition paradox. In the case of logarithmic dependence of utility on available resources, the utility of this reallocation peaks at a certain “optimal happiness,” thus evading the repugnant conclusion. Any faster growth, and the repugnant conclusion survives. Not sure what the −1 in (resources − 1)^(1/3) does; haven’t done the calculation...

• Check the math on the formula I gave: it also peaks, and it grows faster than log.

I don’t think it’s that interesting if the paradox is not faced with a fixed level of resources, since the paradox still makes it hard to construct an intuitive formalization of our preferences about populations that gives intuitive answers to a variety of possible problems, and besides, resources aren’t fixed. See this post.

• Well, I posted the same argument in January. Unfortunately (?) with a bunch of other more novel ideas and without plots and (trivial) bits of calculus. Unfortunately (?) I did not make the bold claim that the paradox is resolved or dissolved, but just the claim that in the real world we are always resource-constrained and the question must be “what is the best population given the limited resources”, therefore the paradox is resolved for most practical purposes.

• I remember reading it, and getting lost. Looked through it again, still lost. Maybe it’s the style, or the presentation, not sure.

• If a moral hypothesis gives the wrong answers on some questions that we don’t face, that suggests it also gives the wrong answers on some questions that we do face.

• One line of attack against the idea that we should reject the repugnant conclusion is to ask why the lives are barely worth living. If it’s because the many people have the same good lives but they’re p-zombies 99.9999% of the time, I can easily believe that increasing the population until there’s more total conscious experience makes the tradeoff worthwhile.

• In thought experiments about utilitarianism, it’s generally a good idea to consider composite beings. A bus is a utility monster in traffic. If it has 30 people in it, its interests count 30 times as much. So maybe there could be things we’d think of as one mind whose internals mapped onto the internals of a bus in a moral-value-preserving way. (I guess the repugnant conclusion is about utility monsters, but for quantity instead of quality.)