Something like this sounds at first qualitatively similar to what I have in mind but isn’t really representative of my thought process. Here are some key differences/clarifications that would help convey my thought process:
1. Clarify that U=happiness-tan(suffering) applies to each individual’s happiness and suffering (the global utility function is then calculated by summing over all people), rather than to the universe’s total suffering and total happiness as I talk about here. People often leave this implicit, but I think being explicit about it is useful.
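The per-individual framing can be sketched directly. In the sketch below, the per-person `happiness` and `suffering` values and the scaling of suffering onto tan's domain are my own illustrative assumptions, not something specified above:

```python
import math

def individual_utility(happiness, suffering):
    # U = happiness - tan(suffering), applied to one person.
    # Assumes suffering is scaled to [0, pi/2), so the tan term
    # diverges as that person's suffering approaches its extreme.
    return happiness - math.tan(suffering)

def global_utility(people):
    # The global utility is the sum of each individual's U,
    # not a function of total happiness and total suffering.
    return sum(individual_utility(h, s) for h, s in people)

population = [(1.0, 0.2), (0.5, 1.0)]  # (happiness, suffering) pairs
print(global_utility(population))
```

Summing per-person utilities is not the same as applying U to population totals: because tan is convex on this domain, one person with extreme suffering drags the global sum down far more than the same suffering spread thinly across many people would.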
2. I don’t want a utility function that ends with something just going to infinity, because such a function gets confused when asked questions like “Would you prefer this infinitely bad thing to happen for five minutes or for ten minutes?” since both options are infinite. This is why value-lexicality as shown in figure 1b is important. Many different events can be infinitely worse than other things from the inside view, and it’s important that our utility function is capable of comparing between them.
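One standard way to keep lexically worse outcomes comparable, rather than collapsing them all to a single infinity, is a lexicographic ordering. The sketch below is my own illustration of that idea, not something from the text: an outcome is a pair of (minutes of lexically bad suffering, ordinary utility), and Python's built-in tuple comparison supplies the lexicographic order:

```python
def outcome(lexical_suffering_minutes, ordinary_utility):
    # Lower tuples are better. Tuples compare lexicographically:
    # lexical suffering is compared first, ordinary utility only
    # breaks ties, so no amount of ordinary goods offsets it.
    return (lexical_suffering_minutes, -ordinary_utility)

five_minutes = outcome(5, 0.0)
ten_minutes = outcome(10, 0.0)
great_ordinary_life = outcome(0, 1_000_000.0)

# Ten minutes of lexically bad suffering is strictly worse than five...
assert ten_minutes > five_minutes
# ...and any lexical suffering outranks arbitrarily large ordinary goods.
assert five_minutes > great_ordinary_life
```

Unlike a utility function that returns a bare infinity, this ordering still answers the five-minutes-versus-ten-minutes question: both dominate every ordinary outcome, yet they remain comparable to each other.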
3. Clarify what is meant by “happiness” and “suffering.” As I mention here, I agree with Ord’s Worse-For-Everyone argument: metrics of happiness and suffering are often literally metrics of how much we should value or disvalue an experience, which tautologically implies that any utility function of them should be a straight line, always and regardless of intensity. Going by this definition, I would never claim that some finite amount of suffering should be treated as infinitely bad, as tan(suffering) would suggest. Instead, my intuition is essentially that, from the inside view, certain experiences are perceived as involving infinitely bad (or lexically worse) suffering, so that if our definition of suffering is based on the inside view (which I think is reasonable), then the amount of suffering experienced can become infinite. I don’t value a finite amount of suffering infinitely; I just think that suffering of effectively infinite magnitude might be possible.
Alternatively, we could define happiness and suffering in terms of physical conditions rather than something subjective. My utility function for experience E would then look more like asking what (and how much) a person would be willing to experience in order to make E stop. This could be approximated by something like U=happiness-tan(suffering) if the physical variable is defined appropriately over a suitable domain. For example, if suffering represents a temperature above room temperature that the person is subjected to for five hours, the disutility might look locally like -tan(suffering) for an appropriate temperature range of maybe 100-300 degrees Fahrenheit. But this kind of claim is more an empirical prediction about how physical conditions map onto suffering than a description of how I actually think about suffering.
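The temperature example can be made concrete with a local approximation. The rescaling below is my own guess at what "defined appropriately over a suitable domain" could mean: map the 100-300 °F range onto tan's domain so that disutility grows without bound as the temperature nears the top of the range.

```python
import math

def disutility_from_temperature(temp_f, low=100.0, high=300.0):
    # Rescale [low, high) degrees Fahrenheit onto [0, pi/2), so the
    # disutility -tan(suffering) diverges as temp_f approaches `high`.
    # The range endpoints and linear rescaling are illustrative
    # assumptions, not claims about real pain thresholds.
    if not (low <= temp_f < high):
        raise ValueError("approximation only claimed on [low, high)")
    suffering = (temp_f - low) / (high - low) * (math.pi / 2)
    return -math.tan(suffering)

print(disutility_from_temperature(150.0))  # mildly negative
print(disutility_from_temperature(295.0))  # steeply negative
```

The point of the sketch is the shape, not the numbers: near the bottom of the range disutility is roughly linear in temperature, while near the top it blows up, which is the "locally like -tan(suffering)" behavior described above.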