I don’t think the human brain’s equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited—and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.
This doesn’t have much to do with my preferences. I might experience the same level of negative emotion when thinking about Busy Beaver(10) people being tortured as opposed to Busy Beaver(1000) people being tortured, but I still have a preference for which one I’d like if I have to choose between the two.
Some mechanism in your (finite) brain is still making that decision.
Sure. But I can express a preference about infinitely many cases in a finite statement. In particular, my preferences include something like the following: given the existence of k sentient, sapient entities, and given i < j ≤ k, I prefer i entities getting tortured to j entities getting tortured, assuming everything else is otherwise identical.
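For concreteness, that finite statement can be rendered as a single quantified formula. The notation is introduced here purely for illustration: T_k(n) stands for the outcome in which n of the k existing entities are tortured, all else identical, and ≻ is strict preference.

```latex
% One finite formula expressing a preference over unboundedly many cases.
% T_k(n): the outcome in which n of k existing entities are tortured,
%         everything else held identical. \succ denotes strict preference.
\forall k \,\forall i \,\forall j \in \mathbb{N}:\quad
  i < j \le k \;\Rightarrow\; T_k(i) \succ T_k(j)
```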
Alas, your brain can’t handle those numbers—beyond a certain point. They can’t even be input into your brain in your lifetime.
If we are talking about augmenting your brain with a machine, so it is able to deal with these huge numbers, those aren’t really the preferences of a human being any more—and you still don’t get to “unbounded” in a finite time—due to the finite visible universe.
I’m not sure how utility (and expected utility) are physically represented in the human brain. Dopamine levels and endorphin levels are the most obvious candidates, but there are probably also various proxies. However, I figure a 16-bit number would probably cover it pretty well. It may seem counter-intuitive—but you don’t really need more than that to make decisions of the type you describe—even for numbers of people with (say) 256-bit representations.
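As a minimal sketch of the "16 bits is plenty" claim (hypothetical scaling, not anything claimed about the brain): quantize the logarithm of the count into 16 bits. Outcomes of very different magnitude still compare correctly; only near-ties collapse.

```python
# Hypothetical illustration: a 16-bit disutility score for counts up to 2**256.
# The magnitude (bit length) of the count is scaled into the range 0..65535.

def utility_16bit(n_deaths: int) -> int:
    """Quantize a count in [0, 2**256) to a 16-bit disutility score."""
    assert 0 <= n_deaths < 2**256
    if n_deaths == 0:
        return 0
    magnitude = n_deaths.bit_length()           # 1 .. 256, roughly log2(n)
    return (magnitude * (2**16 - 1)) // 256     # 0 .. 65535

print(utility_16bit(10**9) < utility_16bit(10**40))        # True: more deaths score worse
print(utility_16bit(2**255) == utility_16bit(2**255 + 1))  # True: resolution is lost up here
```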
Think about it this way:
Omega comes up to you and offers you a choice: it will kill either n or 2n people, depending on what you ask. When you ask what n is, Omega explains that it is an integer, but one unfortunately far too large to define within your lifetime. Would you not still pick n in this dilemma? I know I would.
This isn’t quite enough to prove an unbounded utility function, but if we slightly modify the choice, so that it is n people dying with certainty versus 2n people dying with 99.999% probability (and nobody dying with 0.001% probability), then it is enough.
Your brain could probably make that kind of decision with only a few bits of utility. The function would go: lots-of-death bad, not-so-much-death not so bad. IMO, in no way is that evidence that the brain represents unbounded utilities.
The function would go: lots-of-death bad, not-so-much-death not so bad.
Try using numbers. If you try to bound the function, there will be a sufficiently large n where you will prefer the 99.999% probability of 2n people dying to 100% of n people dying.
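Spelling that out with concrete (purely illustrative) numbers: take any bounded disutility function, say U(n) = -n/(n + C), which saturates at -1.

```python
# Illustrative only: with a bounded disutility U(n) = -n/(n + C), the gamble
# "2n die with 99.999% probability, nobody dies with 0.001% probability"
# eventually beats "n die with certainty" in expected utility.

C = 1_000_000  # assumed scale constant; U saturates at -1

def U(n: int) -> float:
    return -n / (n + C)

def prefers_gamble(n: int) -> bool:
    certain = U(n)                                # n deaths for sure
    gamble = 0.99999 * U(2 * n) + 0.00001 * U(0)  # 2n deaths, tiny reprieve
    return gamble > certain

for n in (10, 10**6, 10**9, 10**12):
    print(n, prefers_gamble(n))
# Prints False for the smaller n values, True by n = 10**12: once U(n) and
# U(2n) are both pinned near the bound, the 0.001% chance of zero deaths
# decides the comparison.
```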
To recap, I objected: “your brain can’t handle those numbers”. To avoid the huge numbers, they were replaced with “n”—and a bizarre story about an all-knowing being. If you go back to the numbers, we are back to the first objection again—there are some numbers that are too big for unmodified humans to handle. No, I can’t tell you which numbers—but they are out there.
The grandparent is a reductio of your assertion (and thus, if you agree that “not-so-much-death is not so bad”, a disproof). You seem to be questioning the validity of algebra rather than retracting the claim. Do you have a counterargument?
I’d suggest that you may be able to argue that the brain does not explicitly implement a utility function as such, which makes sense because utility functions are monstrously complex. Instead, the brain likely implements a bunch of heuristics and other methods of approximating / instantiating a set of desires that could hypothetically be modeled by a utility function (that is unbounded).
“your brain can’t handle those numbers” wasn’t “questioning the validity of algebra”. It was questioning whether the human brain can represent—or even receive—the large numbers in question.
What you said was:
To avoid the huge numbers, they were replaced with “n”
as though this were somehow an indictment of the argument.
Anyway, the important thing is: several people have already explained how a finite system can express an unbounded utility function without having to explicitly express numbers of unbounded size.
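One such construction, as a minimal sketch: a few lines of code are a finite object, yet they define a utility function with no upper bound on its outputs. No output is ever infinite, and no unboundedly large number is stored in advance; the size of the number handled is limited only by the time and memory available when it is needed.

```python
# A finite description of an unbounded utility function. Python integers are
# arbitrary-precision, so every output is a finite number, yet no bound B
# holds for all outputs.

def utility(n: int) -> int:
    """U(n) = n: finitely expressed, unbounded in range."""
    return n

# Any proposed bound B is exceeded by some input:
B = 10**100
print(utility(B + 1) > B)  # True: the bound fails
```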
Dragging in Omega to represent the huge quantities for the human seems to have been a desperate move.
Well, that’s OK—but the issue is what shape the human utility function is. You can’t just extrapolate out to infinity from a small number of samples near to the origin!
I think there are limits to human happiness and pain—and whatever else you care to invoke as part of the human utility function—so there’s actually a finite representation with bounded utility—and I think that it is the best approximation to what the brain is actually doing.
Some people can. It’s called proof by induction.
This is not how proof by induction works.
If you think the proof is flawed, find a counterexample.
A real, independently-verifiable counterexample, not just a nebulous spot on the number line where a counterexample might exist.
The proof by induction is correct. “Extrapolating from a small number of samples”, however, is not proof by induction.
A fallible process. Pain might seem proportional to the number of lashes at first—but keep going for a while, and you will see that the two have a non-linear relationship.
Dopamine levels and endorphin levels are not utility functions. At best they are “hedons”, and even that’s not indisputably clear—there’s more to happiness than that.
A utility function is itself not something physical. It is one (often mathematically convenient) way of summarizing an agent’s preferences in making decisions. These preferences are of course physical. Note, for instance, that everything observable is completely invariant under arbitrary positive affine transformations. Even assuming our preferences can be described by a utility function (i.e. they are consistent—but we know they’re not), it’s clear that putting an upper bound on it would no longer agree with the decisions made by a utility function without such a bound.
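To illustrate the invariance point (with made-up lotteries): any positive affine rescaling a·U + b leaves every expected-utility comparison, and hence every observable choice, unchanged.

```python
# Decisions from expected-utility maximization are invariant under U -> a*U + b
# with a > 0, so observed behaviour cannot fix an absolute scale or bound.

lotteries = {
    "safe":  [(1.0, 5.0)],               # (probability, utility) pairs
    "risky": [(0.5, 0.0), (0.5, 12.0)],
}

def best_choice(transform):
    def expected_utility(lottery):
        return sum(p * transform(u) for p, u in lottery)
    return max(lotteries, key=lambda name: expected_utility(lotteries[name]))

print(best_choice(lambda u: u))               # "risky" (EU 6.0 vs 5.0)
print(best_choice(lambda u: 3.0 * u - 42.0))  # "risky" again, unchanged
```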
Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.
I didn’t say dopamine levels and endorphin levels were utility functions. The idea is that they are part of the brain’s representation of expected utility—and utility.
No. You’ve entirely missed my point. The brain makes decisions. Saying it does so via representing things as utilities is a radical and unsupported assumption. It can be useful to model people as making decisions according to a utility function, as this can compress our description of it, often with only small distortions. But it’s still just a model. Unboundedness in our model of a decision maker has nothing to do with unboundedness in the decision maker we are modeling. This is a basic map/territory confusion (or perhaps advanced: our map of their map of the territory is not the same as their map of the territory).
Not exactly an assumption. We can see—more-or-less—how the fundamental reward systems in the brain work. They use neurotransmitter concentrations and firing frequencies to represent desire and aversion—and pleasure and pain. These are the physical representation of utility, the brain’s equivalent of money. Neurotransmitter concentrations and neuron firing frequencies don’t shoot off to infinity. They saturate—resulting in pleasure and pain saturation points.
I see little indication that the brain is in the assigning absolute utilities business at all. Things like scope insensitivity seem to suggest that it only assigns relative utilities, comparing to a context-dependent default.
They are feedback signals, certainly. Every system with any degree of intelligence must have those. But feedback signals, utility and equivalent of money are not synonyms. To say a system’s feedback signals are equivalent to money is to make certain substantive claims about its design. (e.g. some but not most AI programs have been designed with those properties.) To say they are utility measurements is to make certain other substantive claims about its design. Neither of those claims is true about the human brain in general.
You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes? The obvious objection to your argument is that the representation could in principle take one of many different forms, some of which allow us to represent something unbounded by means of something bounded. If that were the case, then the boundedness of dopamine would not imply the boundedness of utility.
If you want an example of how this representation might be done, here’s one: if you prefer state A to state B, this is (hypothetically) represented by the fact that if you move from state B to state A your dopamine level is raised temporarily—and after some interval, it drops again to a default level. So, every time you move from a less preferred state to a more preferred state, i.e. from lower utility to higher utility, your dopamine level is raised temporarily and then drops back. The opposite happens if you move from higher utility to lower utility.
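A toy version of that scheme, to show the representational point (entirely hypothetical units): each step from a less preferred to a more preferred state produces a bounded burst, yet the utility differences this marking can encode grow without bound, because they live in the accumulated history of bursts rather than in any single signal level.

```python
# Bounded momentary signal, unbounded encodable utility. Each preference step
# yields a burst clamped to [-1, 1]; a path of n upward steps encodes a
# utility gain of n, which exceeds any fixed bound for large enough n.

SIGNAL_MAX = 1.0  # the saturating "dopamine" burst (hypothetical units)

def burst(step_direction: int) -> float:
    """+1 for a step up in preference, -1 for a step down; always bounded."""
    return max(-SIGNAL_MAX, min(SIGNAL_MAX, float(step_direction)))

def encoded_utility_gain(n_steps_up: int) -> float:
    return sum(burst(+1) for _ in range(n_steps_up))

print(encoded_utility_gain(10))     # 10.0
print(encoded_utility_gain(10**6))  # 1000000.0: no single burst ever exceeds 1.0
```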
Though I have offered this as a hypothetical, from the little bit that I’ve read in the so-called “happiness” literature, something like this seems to be what actually goes on. If you receive good fortune, you get especially happy for a bit, and then you go back to a default level of happiness. And conversely, if you suffer some misfortune, you become unhappy for a bit, and then you go back to a default level of happiness.
Unfortunately, a lot of people seem to draw what I think is a perverse lesson from this phenomenon, which is that good and bad fortune do not really matter, because no matter what happens to us, in the end we find ourselves at the default level of happiness. In my view, utility should not be confused with happiness. If a man becomes rich and, in the end, finds himself no happier than before, I don’t think that that is a valid argument against getting rich. Rather, temporary increases and decreases in happiness are how our brains mark permanent increases and decreases in utility. That the happiness returns to default does not mean that utility returns to default.
No. What I actually said was:
The idea is that they [Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.
I do think an unbounded human-equivalent utility function is not supported by any evidence. I reckon Hutter’s [0,1] utility would be able to simulate humans just fine on digital hardware.
I didn’t say that you equated utility with dopamine. [edit: I was replying to an earlier draft of your comment. As of now you’ve changed the comment to delete the claim that I had said that you equated utility with dopamine, though you retained an unexplained “no”.] I said that you said that dopamine is part of how utility is represented. Your quote appears to confirm my statement. You quote yourself saying “[Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.” Among other things, this says that dopamine is part of the brain’s representation of utility. Which is virtually word for word what I said you said, the main difference being that instead of saying “the brain’s representation of utility”, I said, “how utility is represented”. I don’t see any real difference here—just slightly different wording.
Moreover, the key statement that I am basing my interpretation on is not that, but this:
I don’t think the human brain’s equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited—and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.
Here you are arguing that the human brain’s equivalent to a utility function is bounded, and your apparent argument for this is that dopamine and endorphin levels are limited.
I argued that the limitation of dopamine and endorphin levels does not imply that the human brain’s equivalent to a utility function is bounded. You have not addressed my argument, only claimed—incorrectly, it would appear—that I had misstated your argument.
I note that your characterisation of my argument fits very, very poorly with all the times I talked about the finite nature of the human brain in this thread.
You are seriously referring me to your entire oeuvre as a supposed explanation of what you meant in the specific comment that I was replying to?
I was pointing out that there was more to the arguments I have given than what you said. The statement you used to characterise my position was a false syllogism—but it doesn’t represent my thinking on the topic very well.
it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation.
Unbounded is not the same as infinite. The integers are unbounded but no integer is infinite. In the same way, I can have a utility function with no upper bound on the values it outputs without it ever having to output infinity.
The human brain is limited to around 1,300 cm³. It is finite. It seems unlikely that it represents unbounded quantities for utilities.
The Peano axioms are finite. The numbers they describe are unbounded. Finite human brains understand this.
What does that have to do with how human-equivalent utility functions work?
Turing machine tapes are unbounded, but real things are not—they are finite. The human brain is finite and tiny. It is not remotely unbounded.
It shows that your conclusions from human brains being finite don’t follow.
No, it doesn’t. I think you folk are all barking up the wrong tree.
The case for unbounded utilities rests on brains actually using something like surreal numbers to represent infinite utilities. I don’t think there’s any significant evolutionary pressure favouring such a thing—or any evidence that humans actually behave that way—but at least that is a theoretical possibility.
Absent such evidence, I think Occam’s razor favours simple finite utilities that map onto the reinforcement-learning machinery evident in the brain.
Note that I never said or implied that it was.
You said:
Exactly. That is because the stuff about the finite human brain representing unboundedly huge utilities is obvious nonsense. That is why people are roping in Omega and infinite time—desperation.
My 1,300 cm³ brain is capable of understanding the function f(x) = 3x, which is unbounded; therefore finite physical size does not prevent the brain from dealing with unbounded functions.
In general, a finite machine can easily deal with unbounded numbers simply by taking unbounded amounts of time to do so. This is not as much of a problem as it may sound, since there will inevitably be an upper bound to the utilities involved in all dilemmas I actually encounter (unless my lifespan is infinite), but not to the utilities I could, in theory, compute.
This is an augmented human, with a strap-on memory source bigger than the size of the planet? I thought we were probably talking about an ordinary human being—not some abstract sci-fi human that will never actually exist.
Who said anything about an augmented human? My comment was written in the first person except for one sentence, and I certainly don’t have a strap-on memory source bigger than a planet. Despite this, I’m still pretty confident that I have an unbounded utility function.
No, it is not hypothetical. If you build an AI with an unbounded utility function while human utility functions are (mostly) bounded, then you have built a (mostly) unfriendly AI: an AI that will be willing to sacrifice arbitrarily large amounts of current human utility in order to gain the resources to create a wonderful future for hypothetical future humans.
That’s different, though. The hypothetical I was objecting to was humans having unbounded utility functions. I think that idea is a case of making things up.
FWIW, I stand by the idea that instrumental discounting means that debating ultimate discounting vs a lack of ultimate discounting mostly represents a storm in a teacup. In practice, all agents do instrumental discounting—since the future is uncertain and difficult to directly influence.
Any debate here should really be over whether ultimate discounting on a timescale of decades is desirable—or not.
This has been rather surreal. I express what seems to me to be a perfectly ordinary position—that the finite human brain is unlikely to represent unbounded utilities, or to go in for surreal utilities—and a bunch of people have opined that, somehow, the brain does represent unboundedly large utilities, using unspecified mechanisms.
When pressed, infinite quantities of time are invoked. Omega is invited onto the scene—to represent the unbounded numbers for the human. Uh...
I don’t mean to be rude—but do you folk really think you are being rational here? This looks more like rationalising to me.
Is there any evidence for unbounded human utilities? What would make anyone think this is so?
Several mechanisms for expressing unbounded utility functions (NOT unbounded utilities) have been explained. The distinction has been explained. Several explicit examples have been provided.
At the very least, you should update a little based on the resistance you’re experiencing.
As it stands, it looks like you’re not making a good-faith attempt to understand the arguments against your position.
Well, I think I can see the other side. People seem to be thinking that utility in deaths (for example) behaves linearly out to infinity, the way utilitarian ethicists dream about.
I don’t think that is how the brain works. Scope insensitivity shows that most humans deal badly with the large numbers involved—in a manner quite consistent with bounded utility. There is a ceiling effect for pain and for various pleasure-inducing drugs. Those who claim to have overcome scope insensitivity haven’t really changed the underlying utility function used by the human brain. They have just tried to hack it a little—using sophisticated cultural manipulations. Their brain still uses the same finite utilities and utility functions underneath—and it can still be well-modelled that way.
Indeed, I figure you will get more accurate models that way than if you project out to infinity—more accurately reproducing some types of scope insensitivity, for instance.
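For instance (illustrative parameters only), a saturating disutility such as U(n) = -B·n/(n + c) reproduces the classic scope-insensitivity pattern, in which 2,000, 20,000, and 200,000 victims elicit nearly the same response:

```python
# A bounded disutility with assumed parameters reproduces scope insensitivity:
# a 100-fold increase in scope barely moves the output once saturation sets in.

B, c = 1.0, 500.0  # assumed bound and half-saturation constant

def bounded_disutility(n: int) -> float:
    return -B * n / (n + c)

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7,}: {bounded_disutility(n):.4f}")
# 2,000: -0.8000   20,000: -0.9756   200,000: -0.9975
```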
Sorry, I think I’m going to have to bow out at this point. It still looks like you’re arguing against fictitious positions (like “unbounded utility functions produce infinite utilities”) and failing to deal with the explicit counterexamples provided.