OK, since this is a rationalist scientist community, I should have warned you about the eccentric scientific opinions in Garcia’s book. The most valuable thing about Garcia is that he spent 30 years communicating with whoever seemed sincere about the ethical system that currently has my loyalty, so he has dozens of little tricks and insights into how actual humans tend to go wrong when thinking in this region of normative belief space.
Whether an agent’s goal is to maximize the number of novel experiences experienced by agents in the regions of space-time under its control or to maximize the number of gold atoms in the regions under its control, the agent’s initial moves are going to be the same. Namely, your priorities are going to look something like the following. (Which item you concentrate on first is going to depend on your exact circumstances.)
(1) ensure for yourself an adequate supply of things like electricity that you need to keep on functioning;
(2) get control over your own “intelligence,” which probably means that if you do not yet know how to reliably rewrite your own source code, you acquire that ability;
(3a) make a survey of any other optimizing processes in your vicinity;
(3b) try to determine their goals and the extent to which those goals clash with your own;
(3c) assess their ability to compete with you;
(3d) when possible, negotiate with them to avoid negative-sum mutual outcomes;
(4a) make sure that the model of reality that you started out with is accurate;
(4b) refine your model of reality to encompass more and more “distant” aspects of reality, e.g., what are the laws of physics in extreme gravity? Are the laws of physics and the fundamental constants the same 10 billion light years away as they are here? And so on.
Because those things I just listed are necessary regardless of whether in the end you want there to be lots of gold atoms or lots of happy humans, those things have been called “universal instrumental values” or “common instrumental values”.
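The convergence described above can be sketched in a few lines of code. This is purely illustrative and not from the original argument; the priority list and function names are hypothetical stand-ins for the enumerated items (1) through (4b).

```python
# Hypothetical sketch: agents with different terminal goals derive
# the same initial instrumental priorities ("common instrumental values").

COMMON_INSTRUMENTAL_PRIORITIES = [
    "secure a reliable supply of needed resources (e.g., electricity)",
    "gain control over your own intelligence (e.g., self-modification)",
    "survey nearby optimizers; assess their goals and ability to compete; negotiate",
    "verify and progressively refine your model of reality",
]

def initial_priorities(terminal_goal: str) -> list[str]:
    """Whatever the terminal goal, the opening moves are the same."""
    # The terminal goal does not change the instrumental opening.
    return COMMON_INSTRUMENTAL_PRIORITIES

# Two very different terminal goals, identical initial priorities:
assert initial_priorities("maximize gold atoms") == \
       initial_priorities("maximize novel experiences")
```

The point of the sketch is only that `terminal_goal` never appears in the returned list: the early-stage behavior of a competent optimizer is (on this argument) insensitive to what it ultimately wants.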
The goal that currently has my loyalty is very simple: everyone should pursue those common instrumental values as ends in themselves. Specifically, everyone should do their best to maximize the ability of the space, time, matter and energy under their control (1) to assure itself (“it” being the space, time, matter, etc.) a reliable supply of electricity and the other things it needs; (2) to get control over its own “intelligence”; and so on.
I might have mixed my statement or definition of that goal (which I call goal system zero) with arguments as to why that goal deserves the reader’s loyalty, which might have confused you.
I know it is not completely impossible for someone to understand because Michael Vassar successfully stated goal system zero in his own words. (Vassar probably disagrees with the goal, but his restating it is firm evidence that he understands it.)