There is quite a difference between “our understanding is still pre-Socratic” and “we haven’t said enough”. In general I think very few people here (not sure whether this applies to you) are familiar with the philosophical literature on topics in this area. For example, there is very little interest on LessWrong in normative ethics and the associated philosophical research, even though it is directly relevant to alignment: if you have an intent-aligned ASI (which is probably easier to achieve than shooting straight for value alignment), you still need to know which ethics it should implement when you ask it to create a fully value-aligned ASI.
Interestingly, the situation is quite different on the EA Forum, where there are regular high-quality posts engaging with the academic literature on open issues in normative ethics, such as the repugnant conclusion, the procreation asymmetry, and the status of person-affecting theories. Any satisfactory normative ethical theory needs to solve these problems, much as any satisfactory normative theory of epistemic rationality needs to solve the various epistemic paradoxes and related issues.
Again, I don’t know whether this applies to you, but most cases of “philosophy has made basically no progress on topic X” seem to come from people who have very little knowledge of the philosophical literature on topic X.
The Parfit quote from the blog post is taken out of context. Here is the relevant section in Parfit’s essay:
(Each box represents a possible population, with the height of a box representing how good overall an individual life is in that population, and the width representing the size of the population. The area of a box is the sum total “goodness”/”welfare”/”utility” (e.g. well-being, satisfied preferences, etc) in that population. The areas increase from A to Z, with Z being truncated here.)
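(To make the diagram explicit in symbols, with notation of my own rather than Parfit’s: writing $n$ for a box’s width (population size) and $\bar{w}$ for its height (average welfare per life), the area is

$$U_{\text{total}} = n \cdot \bar{w},$$

so a very wide, very flat box like Z can have a larger area than a narrow, tall one like A.)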
Note that Parfit describes two different ways in which an individual life in Z could be barely worth living (emphasis added):
Then he goes on to describe the second possibility (which is arguably unrealistic and much less likely than the first, and which contains the passage quoted by the blog author). The author of the blog post mistakenly ignores that Parfit also mentions the first possibility. After discussing the second, Parfit returns (indicated by “similarly”) to the first possibility:
The “greatest quantity” here can simply be determined by weighing all the positive things in an individual life against all the negative things. Even if the result is only barely positive for each individual, then for a large enough population the summed welfare of these “barely net positive” lives outweighs the sum for a smaller population with much higher average welfare. Yet intuitively, we should not trade a perfect utopia with a relatively small population (A) for a world that is barely worth living in for everyone in a huge population (Z).
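A toy calculation (the numbers are purely illustrative, not from Parfit) makes the trade explicit: let population A have $10{,}000$ people at welfare $100$ each, and population Z have $10{,}000{,}000$ people at welfare $1$ each. Then

$$U_A = 10{,}000 \times 100 = 1{,}000{,}000 \;<\; U_Z = 10{,}000{,}000 \times 1 = 10{,}000{,}000,$$

so a total view ranks Z above A even though every life in Z is only barely worth living.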
That’s the problem with total utilitarianism, which simply sums the “utilities” of the individual lives to measure the overall “utility” of a population. Taking the average instead of the sum avoids the repugnant conclusion, but it leads to other highly counterintuitive conclusions, such as that a population of a million people suffering severely is less bad than a population of a single person suffering slightly more severely, since the latter has a worse average. So arguably both total and average utilitarianism are incorrect, at least without strong modifications.
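Again with purely illustrative numbers of my own (negative values standing for suffering): a million people at welfare $-10$ versus a single person at welfare $-11$ gives averages

$$\frac{1{,}000{,}000 \times (-10)}{1{,}000{,}000} = -10 \;>\; \frac{-11}{1} = -11,$$

so the average view judges the first world less bad, even though it plausibly contains vastly more suffering in total.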
(Personally I think a sufficiently developed version of person-affecting utilitarianism (an alternative to average and total utilitarianism) might well solve all these problems, though the issue is very difficult. See e.g. here.)