Goertzel: Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology.
Agree, but the multiple different current forms of human values are the source of much conflict.
Hanson: Like Ben, I think it is ok (if not ideal) if our descendants’ values deviate from ours, as ours have from our ancestors.
Agree again. And in honor of Robin’s profession, I will point out that the multiple current forms of human values are the driving force causing trade, and almost all other economic activity.
Nesov: Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin’ Future!) won’t be optimized according to our values, won’t be anywhere near as good as it could’ve been otherwise. … Regardless of difficulty of the challenge, it’s NOT OK to lose the Future.
Strongly disagree. The future is not ours to lose. A growing population of enfranchised agents is going to be sharing that future with us. We need to discount our own interest in that future for all kinds of reasons in order to achieve some kind of economic sanity. We need to discount because:
We really do care more about the short-term future than the distant future.
We have better control over the short-term future than the distant future.
We expect our values to change. Change can be good. It would be insane to attempt to determine the distant future now. Better to defer decisions about the distant future until later, when that future eventually becomes the short-term future. We will then have a better idea what we want and a better idea how to achieve it.
As mentioned, an increasing immortal population means that our “rights” over the distant future must be fairly dilute.
If we don’t discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS—Keep It Finite, Stupid.
The idea is not really that you care equally about future events—but rather that you don’t care about them to the extent that you are uncertain about them; that you are likely to be unable to influence them; that you will be older when they happen—and so on.
It is like in chess: future moves are given less consideration—but only because they are currently indistinct low probability events—and not because of some kind of other intrinsic temporal discounting of value.
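The distinction being drawn here can be sketched in code (a toy illustration with hypothetical numbers, not anyone’s actual utility function): weighting future rewards by the probability they are realized is not the same operation as applying an intrinsic time discount, even though both shrink far-future terms.

```python
def time_discounted(rewards, gamma=0.9):
    # Intrinsic temporal discounting: reward at step t is scaled by
    # gamma**t regardless of how predictable the event is.
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def uncertainty_weighted(rewards, probs):
    # No intrinsic time preference: each reward counts in proportion
    # to the probability that it is actually realized.
    return sum(p * r for p, r in zip(probs, rewards))

rewards = [10.0, 10.0, 10.0]
# A distant but certain event keeps its full weight here...
print(uncertainty_weighted(rewards, [1.0, 1.0, 1.0]))  # 30.0
# ...while pure time preference shrinks it no matter what:
print(time_discounted(rewards))  # 10 + 9 + 8.1 = 27.1
```

On this reading, the chess analogy corresponds to the second function: later moves get less weight only because their realization probabilities are low, and the weight returns to full whenever the future becomes certain.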
We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
We have better control over the short-term future than the distant future.
That we probably can’t have something doesn’t imply we shouldn’t have it.
We expect our values to change. Change can be good.
That we expect something to happen doesn’t imply it’s desirable that it happens. It’s very difficult to arrange so that change in values is good. I expect you’d need oversight from a singleton for that to become possible (and in that case, “changing values” won’t adequately describe what happens, as there is probably better stuff to make than different-valued agents).
As mentioned, an increasing immortal population means that our “rights” over the distant future must be fairly dilute.
Preference is not about “rights”. It’s merely game theory for coordination of satisfaction of preference.
If we don’t discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS—Keep It Finite, Stupid.
God does not care about our mathematical difficulties. —Einstein.
We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
Alright. I shouldn’t have said “we”. I care more about the short term. And I am quite certain. WAY!
We have better control over the short-term future than the distant future.
That we probably can’t have something doesn’t imply we shouldn’t have it.
Huh? What is it that you are not convinced we shouldn’t have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
Alright. I shouldn’t have said “we”. I care more about the short term. And I am quite certain. WAY!
I believe you know my answer to that. You are not licensed to have absolute knowledge about yourself. There are no human or property rights on truth. How do you know that you care more about the short term? You can have beliefs or emotions that suggest this, but you can’t know what all the stuff you believe and all the moral arguments you respond to cash out into on reflection. We only ever know approximate answers, and given the complexity of the human decision problem and the sheer inadequacy of human brains, any approximate answers we do presume to know are highly suspect.
Huh? What is it that you are not convinced we shouldn’t have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
That we aren’t qualified doesn’t mean that we shouldn’t have that control. Exercising this control through decisions made with human brains is probably not it, of course; we’d have to use finer tools, such as FAI or upload bureaucracies.
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
Don’t joke, it’s serious business. What do you believe on the matter?
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
Don’t joke, it’s serious business. What do you believe on the matter?
I am not the person who initiated this joke. Why did you mention God? If you don’t care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?
I am not the person who initiated this joke. Why did you mention God?
Einstein mentioned God, as a stand-in for Nature.
If you don’t care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?
I didn’t say I don’t care for discounting. I said that I believe that we must be uncertain about this question. That I don’t have solutions doesn’t mean I must discard the questions as answered negatively.
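One of the standard puzzles being alluded to is easy to exhibit concretely: the St. Petersburg game, in which an undiscounted, unbounded utility assigns a simple coin-flipping gamble infinite expected value. A quick sketch:

```python
def st_petersburg_partial_ev(n_flips):
    # A fair coin is flipped until it lands heads; heads on flip n
    # pays 2**n. Each term of the expectation is (0.5**n) * 2**n = 1,
    # so the partial sums grow without bound.
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_flips + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_ev(n))  # partial EV is exactly n
```

Bounding utilities, discounting, or otherwise “keeping it finite” are the usual escape routes, which is what makes the puzzle a live issue for the undiscounted position.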
We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
Yes. So, for “our values”, read “our extrapolated volition”.
It’s not clear to me how much you and Nesov actually disagree about “changing” values, vs. you meaning by “change” the sort of reflective refinement that CEV is supposed to incorporate, while Nesov uses it to mean non-reflectively-guided (random, evolutionary, or whatever) change.
I do not mean “reflective refinement” if that refinement is expected to take place during a FOOM that happens within the next century or two. I expect values to change after the first superhuman AI comes into existence. They will inevitably change by some small epsilon each time a new physical human is born or an uploaded human is cloned. I want them to change. The “values of mankind” are something like the musical tastes of mankind or the genome of mankind. It is a collage of divergent things, and the set of participants in that collage continues to change.
VN and I are in real disagreement, as far as I can tell.
This is not a disagreement, but a failure of communication. There is no one relevant sentence in this dispute which we both agree that we understand in the same sense, and whose truth value we assign differently.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
Any adequate disagreement must be about different assignment of truth values to the same meaning. For example, I disagree with the truth of the statement that we don’t converge on agreement because of differences in our values, given both your and my preferred interpretations of “values”. But explaining why this condition is not the source of our disagreement requires me to explain to you my sense of “values”, the normative and not the factual one, which I have failed to accomplish.
Any adequate disagreement must be about different assignment of truth values to the same meaning.
I think we are probably in agreement that we ought to mean the same thing by the words we use before our disagreement has any substance. But your mention of “truth values” here may be driving us into a diversion from the main issue. Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
But that is a diversion. I look forward to your explanation of your sense of the word “value”—a sense which has the curious property (as I understand it) that it would be a tragedy if mankind does not (with AI assistance) soon choose one point (out of a “value space” of rather high dimensionality) and then fix that point for all time as the one true goal of mankind and its creations.
But your mention of “truth values” here may be driving us into a diversion from the main issue.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
Yes, and this is the core of our disagreement. Since your position is that something is meaningless, and mine is that there is a sense behind it, this is a failure of communication and not a true disagreement, as I didn’t manage to communicate to you the sense I see. At this point, I can only refer you to the metaethics sequence, which I know is not very helpful.
One last attempt, using an intuition/analogy dump not carefully explained.
Where do the objective conclusions about “is” statements come from? Roughly, you encounter new evidence, including logical evidence, and then you look back and decide that your previous understanding could be improved upon. This is the cognitive origin of anything normative: you have a sense of improvement, and expectation of potential improvement. Looking at the same situation from the past, you know that there is a future process that can suggest improvements, you just haven’t experienced this process yet. And so you can reason about the truth without having it immediately available.
If you understand the way the previous paragraph explains the truth of “is” questions, you can apply exactly the same explanation to “ought” questions. You can decide in the moment what you prefer, what you choose, which action you perform. But in the future, when you learn more and experience more, you can look back and see that you should’ve chosen differently, that your decision could’ve been improved. This anticipation of possible improvement generates a semantics of preference over decisions that is not logically transparent. You don’t know what you ought to choose, but you know that there is a sense in which some action is preferable to some other action, and you don’t know which is which.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Sorry. I missed that subtext. Giving up may well be the best course.
your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication.
But my position is not that something (specifically an ‘ought’ statement) is meaningless. I only maintain that the meaning is not attained by assigning “truth value conditions”.
One last attempt …
Your attempt was a step in the right direction, but IMO still leaves a large gap in understanding. You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating.
I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on. Instead, the dynamics I am talking about is the biological evolutionary process which results in a change over time in the typical human brain. That plus the technological change over time which is likely to bring uploaded humans, AIs, aliens, and “uplifted” non-human animals into our collective social contract.
You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating. I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on.
How can we know whether that is true or not? If we had access to multiple mature alien races, and could examine their moral systems, that might be a reasonable conclusion—if they were all very different. However, until then, the moral systems we can see are primitive—and any such conclusions would seem to be premature.
It’s very difficult to arrange so that change in values is good. I expect you’d need oversight from a singleton for that to become possible (and in that case, “changing values” won’t adequately describe what happens, as there is probably better stuff to make than different-valued agents).
We do seem to have an example of systematic positive change in values—the history of the last thousand years. No doubt some will argue that our values only look “good” because they are closest to our current values—but I don’t think that is true. Another possible explanation is that material wealth lets us show off our more positive values more frequently. That’s a harder charge to defend against, but wealth-driven value changes are surely still value changes.
Systematic, positive changes in values tend to suggest a bright future. Go, cultural evolution!
If we don’t discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS—Keep It Finite, Stupid.
Too much discounting runs into problems with screwing the future up, to enjoy short-term benefits. With 5-year political horizons, that problem seems far more immediate and pressing than the problems posed by discounting too little. From the point of view of those fighting the evils that too much temporal discounting represents, arguments about mathematical infinity seem ridiculous and useless. Since such arguments are so feeble, why even bother mentioning them?
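The size of the effect is worth making concrete (illustrative numbers only): even a modest annual discount rate makes century-scale harms nearly invisible, which is exactly the failure mode being described.

```python
def present_value(future_value, annual_rate, years):
    # Standard exponential discounting: value / (1 + r)**years.
    return future_value / (1 + annual_rate) ** years

# A harm worth 1,000,000 units occurring a century from now:
print(present_value(1_000_000, 0.05, 100))   # ~7,600: mostly ignored
print(present_value(1_000_000, 0.001, 100))  # ~905,000: still counts
```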
I agree, but be careful with “We expect our values to change. Change can be good.” Dutifully explain that you are not talking about value change in the mathematical sense, but about value creation, i.e. extending valuation to novel situations in a way that is guided by values at a meta-level with respect to the values casually applied to remotely similar familiar situations.
I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don’t currently know how to handle this kind of upheaval mathematically.
If that is seen as a problem, then we better get started working on building better mathematics.
OK. I’ve been sympathetic with your view from the beginning, but haven’t really thought through (so, thanks) the formalization that puts values on the epistemic level: a distribution of beliefs over propositions “my-value (H, X)”, where H is my history up to now and X is a preference (an order over world states, which include me and my actions). But note that people here will call the very logic you use to derive such distributions your value system.
ETA: obviously, distribution “my-value (H1, X[H2])”, where “X[H2]” is the subset of worlds where my history turns out to be “H2”, can differ greatly from “my-value (H2, X[H2])”, due to all sorts of things, but primarily due to computational constraints (i.e. I think the formalism would see it as computational constraints).
ETA P.S.: let’s say for clarity that I meant “X[H2]” is the subset of world-histories where my history has prefix “H2”.
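The bookkeeping of this formalization can be sketched as follows (everything here is hypothetical scaffolding; the hard part, actually deriving credences from a history H, is stubbed out): a credence distribution over propositions my-value(H, X), with X ranging over total orders on a tiny set of world states.

```python
from itertools import permutations

world_states = ("A", "B", "C")
# Candidate preferences X: total orders over the world states.
candidate_preferences = list(permutations(world_states))

def my_value_credences(history):
    # Stub for "the very logic you use to derive such distributions".
    # A real version would weight each candidate preference X by how
    # strongly the history supports the proposition my-value(history, X);
    # here it just returns a uniform distribution.
    n = len(candidate_preferences)
    return {x: 1.0 / n for x in candidate_preferences}

dist = my_value_credences(history=("obs_1", "obs_2"))
assert abs(sum(dist.values()) - 1.0) < 1e-12  # a proper distribution
```

As the comment notes, the substantive disagreement then concentrates in the stubbed-out derivation step, not in this bookkeeping.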
I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don’t currently know how to handle this kind of upheaval mathematically.
What we may need more urgently is the maths for agents who have “got religion”—because we may want to build that type of agent—to help to ensure that we continue to receive their prayers and supplications.
If we don’t discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS—Keep It Finite, Stupid.
http://lesswrong.com/lw/n2/against_discount_rates/
How can we know whether that is true or not? If we had access to multiple mature alien races, and could examine their moral systems, that might be a reasonable conclusion—if they were all very different. However, until then, the moral systems we can see are primitive—and any such conclusions would seem to be premature.
I’m sorry. I don’t know which statement you mean to designate by “that”. Nor do I know which conclusions you worry might be premature. To the best of my knowledge, I did not draw any conclusions.
I beseech you, in the bowels of Christ, think it possible your fundamental values may be mistaken.
I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don’t currently know how to handle this kind of upheaval mathematically.