“I won't let the world be destroyed, because then rationality can't influence the future” is an attempt to avoid weighing your love of rationality against anything else.
Think about it. Is it really the fact that rationality is no longer in control that bugs you, rather than everyone dying, or the astronomical number of worthwhile lives that will never be lived?
If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?
Thank you. I initially wrote my function with the idea of making it one (of many) “lower bounds” on how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought that “this works fine as a general theory, not just a lower bound.”
Downvoted for the fake utility function.
Thank you for helping me think more clearly.