The error is that it’s humans who are attempting to implement utilitarianism. I’m not talking about hypothetical non-human intelligences, and I don’t think they were implied in the context.
See also Ends Don’t Justify Means (Among Humans): having non-consequentialist rules (e.g. “Thou shalt not murder, even if it seems like a good idea”) can be consequentially desirable since we’re not capable of being ideal consequentialists.
Oh, indeed. But when you’ve repeatedly emphasised “shut up and multiply”, tacking “btw don’t do anything weird” on the end strikes me as likely to go unheeded by your readers, particularly when they most need to heed it.
I don’t think hypothetical superhuman intelligences would be dramatically different in their ability to apply predictive models under uncertainty. If you increase power by the same ratio that separates mankind from a single amoeba, you only double anything that scales logarithmically with power. While in many important cases there are faster approximations, it’s magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly become unethical as you try to improve their fidelity (if a model is emulating people and putting them through torture and dust-speck experiences to compare values).
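To make the butterfly-effect point concrete, here is a minimal sketch (my own toy example, not anything from the exchange above) using the logistic map, a standard chaotic system: even when the model is exact, an initial measurement error of one part in a trillion swamps the prediction within a few dozen steps, and a millionfold improvement in precision buys only a logarithmic extension of the useful forecast horizon.

```python
# Toy illustration (assumed example, not from the original discussion):
# even with a *perfect* model of a chaotic system, a tiny error in the
# initial measurement grows roughly exponentially, so vastly more
# computing power or sensor precision buys only logarithmically more
# lead time.

def logistic_map(x, r=4.0):
    """One step of the logistic map; r=4.0 puts it in the chaotic regime."""
    return r * x * (1.0 - x)

true_state = 0.2            # the "real world"
model_state = 0.2 + 1e-12   # the model's estimate: off by one part in a trillion

for step in range(1, 61):
    true_state = logistic_map(true_state)
    model_state = logistic_map(model_state)
    error = abs(true_state - model_state)
    if step % 10 == 0:
        print(f"step {step:2d}: prediction error ~ {error:.3e}")

# Typical behaviour: the error roughly doubles each step, saturating near
# step 40. Shrinking the initial error a millionfold only pushes the
# horizon of useful prediction out by about 20 more steps (log2(10^6)).
```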