If you mean Yvain’s, then while his writing is generally excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.
That would normally be referred to as consequentialism, not utilitarianism.
To the extent that LessWrong has an official ethical system, that system is definitely not utilitarianism.
I hope the forum’s participants end up discussing a well-balanced set of topics in EA, so that, for example, we don’t end up with 10% of conversations about AGI risk mitigation while only 1% are about policy interventions.
I assume balance would imply 99% about AGI risk mitigation, and 0.001% about (non-AGI) policy interventions?
I donated!