My earlier comment is not to imply that I think “maximization of human happiness” is the most preferred goal.
An obvious one, yes, but a faulty one: “human” is a severely underspecified term.
In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.
Possibly, the best we can do is come up with a plausible global goal that suits us for the medium run, while we try to find out more.
That is, after all, what we have always done as human beings.