EA does not make a precise claim about what promoting welfare consists in (for example, whether it’s more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.
That’s rather a double standard. Any specific form of EA likewise makes a precise claim about what should be maximized, so the comparison should be like-for-like: generic EA against generic utilitarianism, or a specific form of one against a specific form of the other.