Many of our values and goals, what we want, are culturally induced or the result of our ignorance. Reduce our ignorance and you change our values. One trivial example is our intellectual curiosity: if we no longer need to figure out what we want on our own, our curiosity is impaired.
I don’t follow this one. Is this just making the argument-by-definition that an omniscient being couldn’t be curious? The universe seems to place hard limits on how much computation can be done and storage accessed, so there will always be things a FAI will not know. (I can also appeal to more general principles here: Gödel, Turing, Hutter and Legg’s no-elegant-predictor results, etc.)
Take for example an agent that is facing the Prisoner’s Dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal, or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?
Er, what? If the agent isn’t reaping greater payoffs then it was simply mistaken (that happens sometimes) and can go back to not cooperating. If it had defecting as an intrinsically good thing, then why did it ever start cooperating?
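The point about payoffs can be checked directly: in the one-shot Prisoner’s Dilemma, defection is a best response to either move the opponent makes, so an agent that learns this isn’t changing its goal, it’s correcting a mistake about how to pursue it. A minimal sketch (the payoff numbers are the conventional illustrative ones, not taken from the comments above):

```python
# One-shot Prisoner's Dilemma payoffs (row player's payoff listed first).
# Standard ordering T > R > P > S: temptation 5, reward 3, punishment 1, sucker 0.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both get the reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both get the punishment P
}

def best_response(opponent_move):
    """Return our move that maximizes our payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Defection dominates: it is the best response whatever the opponent does.
print(best_response("C"))  # -> D
print(best_response("D"))  # -> D
```

Knowing game theory here changes the agent’s strategy, not its utility function: the payoffs it is maximizing are the same before and after.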
It seems to me that becoming more knowledgeable and smarter is gradually altering our utility functions.
If this is the basic point, you’re missing a lot of more germane results than what you put down. Openness and parasite load (or psilocybin), IQ and cooperation and taking normative economics stances (besides the linked cites, Pinker had a ton of relevant stuff in the later chapters of Better Angels), etc.
I don’t follow this one. Is this just making the argument-by-definition that an omniscient being couldn’t be curious?
I think XiXiDu is actually saying that if you model a given human, but with changed context that flows from their inferred values (smarter, more the people we wished we were, etc...) you will wind up with a model of a completely different human whose values are not coherent with those of the source human, because our context is extremely important in determining what we think, know, want, and value.
Indeed? May I suggest reading http://www.wired.com/wiredscience/2011/08/spoilers-dont-spoil-anything/ (PDF) ?