i feel like (2)/(3) is about “what does (the altruistic part of) my utility function want?” and (4) is “how do i decision-theoretically maximize said utility function?”. they’re different layers, and ultimately it’s (2)/(3) we want to maximize, but maximizing (2)/(3) entails allocating some of the future lightcone to (4).