“I talk about consequentialists, but not rational consequentialists”: OK, this was not the impression I was getting.
Chris van Merwijk
What kinds of algorithms do multi-human imitators learn?
Are human imitators superhuman models with explicit constraints on capabilities?
Reading this post a while after it was written: I’m not going to respond to the main claim (which seems quite likely to be true), but just to the specific arguments, which seem suspicious to me. Here are some points:
In my model of the standard debate setup with a human judge, the judge can just use both answers in whichever way they want, independently of which one they select as the correct answer (see the toy sketch after these points). The fact that one answer provides more useful information than the answer to “2+2=?” doesn’t imply a “direct” incentive for the human judge to select it as the correct answer. Upon introspection, I myself would probably say that “4” is the correct answer, while still being very interested in the other answer (the answer on AI risk). I don’t think you disagreed with this?
At a later point you say that the real reason why the judge would nevertheless select the QIA as the correct answer is that the judge wants to train the system to do useful things. You seem to say that a rational consequentialist would make this decision. Then later you say that this is probably/plausibly (?) a bad thing: “Is this definitely undesirable? I’m not sure, but probably”. But if it really is a bad thing and we can know this, then surely a rational judge would know this too, and could just decide not to do it? If you were the judge, would you select the QIA, despite it being “probably undesirable”?
Given that we are talking about optimal play, and the human judge is in fact not rational/safe, the debater could manipulate the judge, so the previous argument doesn’t in fact imply that judges won’t select QIAs. The debater could deceive and manipulate the judge into (incorrectly) thinking that it should select the QIA, even if you/we currently believe that this would be bad. I agree this kind of deception would probably happen under optimal play (if that is indeed what you meant), but it relies on the judge being irrational or manipulable, not on some argument that “it is rational for a consequentialist judge to select answers with the highest information value”.
It seems to me that either we think there is no problem with selecting QIAs as answers, or we think that human judges will be irrational and manipulated, but I don’t see the justification in this post for saying “rational consequentialist judges will select QIAs AND this is probably bad”.
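For concreteness, here is a toy sketch of the separation I have in mind between selecting the correct answer and using the information in both answers. The names (judge, directly_answers) are made up for illustration; this is not the actual debate protocol, just a minimal model of one round with a human judge.

```python
# Toy model of one debate round with a human judge (hypothetical names throughout).
# The point: the verdict that becomes the training signal is a separate act from
# whatever use the judge makes of the information contained in both answers.

def directly_answers(question: str, answer: str) -> bool:
    # Stand-in for the judge's own assessment of whether `answer` actually
    # answers `question`; in reality this is human judgement, not a string check.
    return question == "2+2=?" and answer.strip() == "4"

def judge(question: str, answer_a: str, answer_b: str):
    # The verdict only tracks which answer is correct for the question asked.
    verdict = answer_a if directly_answers(question, answer_a) else answer_b
    # The judge keeps both answers and can act on either, independently of the verdict.
    notes = [answer_a, answer_b]
    return verdict, notes

verdict, notes = judge("2+2=?", "4", "An important consideration about AI risk is ...")
print(verdict)  # prints "4": selected as correct, even though the other answer is more useful
```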
Yes, but I think your reasoning “If 2 is only talking about the map, it doesn’t imply 3” is too vague. I’d rather not go into it, though, because I am currently busy with other things, so I’d suggest letting the reader decide.
Edit: reading back my response, it might come across as a bit rude. If so, sorry for that, I didn’t mean it that way.
I think this is too vague, but I will drop this discussion and let the reader decide.
“But without the premise that the territory is maths, the rest of the paradox doesn’t follow.”
I explicitly said “mathematically describable”, implying that I am not identifying the theory with reality. Nothing in my “argument” makes this identification.
If an object knows that it exists, then it actually exists. Moreover, assuming that the state of a brain is a mathematical fact about the mathematical theory, the fact that the object knows it exists is in principle a mathematical implication of that theory (if observation 2 is correct). Hence it would be an implication of the theory that the theory describes an existing reality.
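A rough formal rendering of the step (the notation is mine, not from the post: T is the mathematical theory, E(x) means “x exists”, K_x(p) means “x knows that p”):

```latex
\begin{align*}
\text{(i)}\;\;  & K_x(E(x)) \rightarrow E(x)   && \text{knowing one exists implies existing} \\
\text{(ii)}\;\; & T \vdash K_x(E(x))           && \text{the knowing brain state is a mathematical fact of } T \\
\Rightarrow\;\; & T \text{ together with (i) yields } E(x) && \text{i.e.\ } T \text{ describes an existing reality}
\end{align*}
```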
Basically, yes.
“There may also be mathematical properties that are universe-specific (the best candidates here are natural constants), but the extent to which these exist is questionable”
The exact position of every atom in the universe at time t=10^10 years is a “mathematical property of our universe” in my terminology. The fact that some human somewhere uttered the words “good morning” at some point today is a complicated mathematical property of our universe, in principle derivable from the fundamental theory of physics.
A paradox of existence
Tangential comment: regarding “I will define success as producing fission weapons before the end of war in Europe”, I’m not sure this is the right criterion for success for the purpose of analogizing to AGI. It seems to me that “producing fission weapons before an Axis power does” is more appropriate.
And this seems overwhelmingly the case, yes: “theory of atomic bomb was considerably more advanced at the beginning of Manhattan project compared to our understanding of theory of aligned AGI”
I’m not sure I understand the motivation behind the question. How much of my modern knowledge am I supposed to throw away? Note that I am not in fact an atomic theorist with the state of knowledge of atomic theory in 1942, so it’s hard to know what I’d think, but I can imagine assigning somewhere between 5% and 95% depending on how informed an atomic theorist I actually was and what it was actually like in 1942. Maybe I could give a better answer if you clarify the motivation behind the question?
Manhattan project for aligned AI
“I tend to think that learning and following the norms of a particular culture (further discussion) isn’t too hard a problem for an AGI which is motivated to do so”. If the AGI is motivated to do so, then the value learning problem is already solved and nothing else matters (in particular, my post becomes irrelevant), because it can indeed learn the further details in whichever way it wants. We would somehow already have managed to create an agent with an internal objective that points to Bedouin culture (human values), which is the whole/complete problem.
I could say more about the rest of your comment, but I’m just checking first: does the above change your model of my model significantly?
Also, regarding “I think I’m much more open-minded than you to …”: to be clear, I’m not at all convinced about this; I’m open to this distinction not mattering at all. I hope I didn’t come across as not open-minded about it.
Not really a fair characterization, I think: point 2 seems mostly orthogonal to me (though I probably disagree with your claim, i.e. most important things are passed on from previous generations: children learn that theft is bad, racism is bad, etc., and all of those things are passed on from either parents or other adults. I don’t care much about the distinction between parents and other adults/society in this case. I know about the research suggesting that parenting has little influence; I’d rather not go into it). Point 1 seems more relevant. In fact, maybe the main reason for me to think this post is irrelevant is that the inductive biases in AI systems will be too different from those of humans (although note that genes still allow for a lot of variability in ethics and so on). But I still think it might be a good idea to keep in mind that “information in the brain about values has a higher risk of not getting communicated into the training signal if the method of eliciting that information is not adapted to the way humans normally express it”, if indeed that is true.
I haven’t specified anything about the algorithms, though maybe they would somehow have to be different. The point is that the format of the human feedback is different. Really, this post is about the format in which humans provide feedback rather than about the structure of the AI systems (i.e. a difference in the method of generating the training signal rather than a difference in the learning algorithm).
The thing underlying the intuition is more something like: we have a method of feedback that humans understand, that works fairly well, and that is adapted to the way values are stored in human brains. If we try to have humans give feedback in ways that are not adapted to that, I expect information to be lost. The fact that it “feels natural” is a proxy for “the method of feedback to machines is adapted to the way humans normally give feedback to other humans”, without which I am at least concerned about information loss (not claiming it’s inevitable). I don’t inherently care about the “feeling” of naturalness.
Regarding there being no Safe Natural Intelligence: I agree that there is no such thing, but this is not really a strong argument against my point. It doesn’t make me suddenly feel comfortable about “unnatural” (I need a better term) methods for humans to provide feedback to AI agents. The fact that there are bad people doesn’t negate the idea that the only source of information about what is good seems to be stored in brains, and that we need to extract that information in a way that is adapted to how those brains normally express it.
Maybe I should have called it “human-adapted methods of human feedback” or something.
Responding to this very late, but: If I recall correctly, Eric has told me in personal conversation that CAIS is a form of AGI, just not agent-like AGI. I suspect Eric would agree broadly with Richard’s definition.