This is one of the answers: https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation
Ramana Kumar
The trick is that for some of the optimisations, a mind is not necessary. There is perhaps a sense, though, in which the whole history of the universe (or life on earth, or evolution, or whatever is appropriate) will become implicated for some questions.
I think https://www.alignmentforum.org/posts/TATWqHvxKEpL34yKz/intelligence-or-evolution is somewhat related in case you haven’t seen it.
I’ll add $500 to the pot.
Interesting. It’s not so obvious to me that it’s safe; maybe it is, because avoiding POUDA is such a low bar. But the sped-up human can do the reflection thing, and plausibly, with enough speed-up, can be superintelligent relative to everyone else.
A possibly helpful (because starker) hypothetical training approach you could try for thinking about these arguments: make an instance of the imitatee that has all their (at least cognitive) actions sped up by some large factor (e.g., 100x), say via brain emulation (or just “by magic” for the purpose of the hypothetical).
It means f(x) = 1 is true for some particular x’s, e.g., f(x_1) = 1 and f(x_2) = 1, there are distinct mechanisms for why f(x_1) = 1 compared to why f(x_2) = 1, and there’s no efficient discriminator that can take two instances f(x_1) = 1 and f(x_2) = 1 and tell you whether they are due to the same mechanism or not.
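(A toy illustration of the “distinct mechanisms” part, with a made-up f of my own; note that in this toy a discriminator is easy to write, which is exactly what the interesting case lacks.)

```python
def f(x: int) -> int:
    # Mechanism A: x is a multiple of 15.
    if x % 15 == 0:
        return 1
    # Mechanism B: the decimal digits of x sum to 1 (e.g. 10, 100, 1000).
    if sum(int(d) for d in str(abs(x))) == 1:
        return 1
    return 0

# f(30) == 1 via mechanism A; f(100) == 1 via mechanism B: same output, different reasons.
# Here you can cheaply check which branch fired; the interesting case is when no
# efficient procedure can tell, given two inputs with f(x) == 1, whether the
# underlying mechanism is the same.
```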
Will the discussion be recorded?
(Bold direct claims, not super confident—criticism welcome.)
The approach to ELK in this post is unfalsifiable.
A counterexample to the approach would need to be a test-time situation in which:
1. The predictor correctly predicts a safe-looking diamond.
2. The predictor “knows” that the diamond is unsafe.
3. The usual “explanation” (e.g., heuristic argument) for safe-looking-diamond predictions on the training data applies.
Points 2 and 3 are in direct conflict: the predictor knowing that the diamond is unsafe rules out the usual explanation for the safe-looking predictions.
So now I’m unclear what progress has been made. This looks like simply defining “the predictor knows P” as “there is a mechanistic explanation of the outputs starting from an assumption of P in the predictor’s world model”, then declaring ELK solved by noting we can search over and compare mechanistic explanations.
I think you’re right—thanks for this! It makes sense now that I recognise the quote was in a section titled “Alignment research can only be done by AI systems that are too dangerous to run”.
“We can compute the probability that a cell is alive at timestep 1 if each of it and each of its 8 neighbors is alive independently with probability 10% at timestep 0.”
We the readers (or I guess specifically the heuristic argument itself) can do this, but the “scientists” cannot, because the “scientists don’t know how the game of life works”.
Do the scientists ever need to know how the game of life works, or can the heuristic arguments they find remain entirely opaque?
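(For reference, the quoted computation is indeed easy for us readers; here’s a minimal brute-force sketch of my own, assuming the standard Life rule, that enumerates all 2^9 configurations of the cell and its 8 neighbours.)

```python
from itertools import product

P_ALIVE = 0.1  # each of the 9 cells alive independently with probability 10% at timestep 0

def alive_next(center, neighbours):
    # Standard Game of Life rule.
    n = sum(neighbours)
    return n == 3 or (center == 1 and n == 2)

prob_alive_t1 = 0.0
for cells in product([0, 1], repeat=9):      # center cell + its 8 neighbours
    center, neighbours = cells[0], cells[1:]
    weight = 1.0
    for c in cells:
        weight *= P_ALIVE if c == 1 else (1 - P_ALIVE)
    if alive_next(center, neighbours):
        prob_alive_t1 += weight

print(prob_alive_t1)  # comes out to roughly 0.048
```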
Another thing confusing to me along these lines:
“for example they may have noticed that A-B patterns are more likely when there are fewer live cells in the area of A and B”
Where do they (the scientists) notice these fewer live cells? Do they have some deep interpretability technique for examining the generative model and “seeing” its grid of cells?
They have a strong belief that in order to do good alignment research, you need to be good at “consequentialist reasoning,” i.e. model-based planning, that allows creatively figuring out paths to achieve goals.
I think this is a misunderstanding, and that approximately zero MIRI-adjacent researchers hold this belief (that good alignment research must be the product of good consequentialist reasoning). What seems more true to me is that they believe that better understanding consequentialist reasoning—e.g., where to expect it to be instantiated, what form it takes, how/why it “works”—is potentially highly relevant to alignment.
I’m focusing on the code in Appendix B.
What happens when self.diamondShard’s assessment of whether some consequences contain diamonds differs from ours? (Assume the agent’s world model is especially good.)
“upweights actions and plans that lead to”
How is it determined what the actions and plans lead to?
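(To make the questions concrete, here is a rough sketch of the kind of loop I have in mind. All names below are my own stand-ins, not the post’s actual Appendix B code.)

```python
# Hypothetical stand-ins for the agent's components (not the post's code).

class WorldModel:
    def predict_consequences(self, plan):
        # Hypothetical: the agent's own model says what the plan leads to.
        return {"diamonds_present": "diamond" in plan}

class DiamondShard:
    def assess(self, consequences):
        # Upweights consequences that (the agent's model says) contain diamonds.
        return 1.0 if consequences["diamonds_present"] else 0.0

def choose_plan(world_model, diamond_shard, candidate_plans):
    # The crux of both questions: "what the plan leads to" and "contains diamonds"
    # are both judged by the agent's own world model, not by us.
    return max(candidate_plans,
               key=lambda p: diamond_shard.assess(world_model.predict_consequences(p)))

print(choose_plan(WorldModel(), DiamondShard(), ["wander around", "stay near the diamond"]))
```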
We expect an explanation in terms of the weights of the model and the properties of the input distribution.
We have a model that predicts a very specific pattern of observations, corresponding to “the diamond remains in the vault.” We have a mechanistic explanation π for how those correlations arise from the structure of the model.
Now suppose we are given a new input on which our model predicts that the diamond will appear to remain in the vault. We’d like to ask: in this case, does the diamond appear to remain in the vault for the normal reason π?
A problem with this: π can explain the predictions on both train and test distributions without all the test inputs corresponding to safe diamonds. In other words, the predictions can be made for the “normal reason” π even when the normal reason of the diamond being safe doesn’t hold.
(elaborating the comment above)
Because π is a mechanistic (as opposed to teleological, or otherwise reference-sensitive) explanation, its connection to what we would like to consider “normal reasons” has been weakened, if not outright broken. On the training distribution, suppose we have two explanations for the predicted “the diamond remains in the vault” observations.
First there is ɸ, the explanation that there was a diamond in the vault, the cameras were working properly, etc., and that the predictor is a straightforward predictor with a human-like world-model (ɸ is kinda loose on the details of how the predictor works, and just says that it does work).
Then there is π, an explanation that relies on various details about the circuits implemented by the predictor’s weights: it traces abstractly how this distribution of inputs produces outputs with the observed properties, using various concepts and abstractions that make sense of the particular organisation of this predictor’s weights. (π is kinda glib about real-world diamonds but has plenty to say about how the predictor works, and some of what it says looks like there’s a model of the real world in there.)
We might hope that a lot of the concepts π is dealing in do correspond to natural human things like object permanence or diamonds or photons. But suppose not all of them do, and/or there are some subtle mismatches.
Now on some out-of-distribution inputs that produce the same predictions, we’re in trouble when π is still a good explanation of those predictions but ɸ is not. This could happen because, e.g., π’s version of “object permanence” is just broken on this input, and was never really about object permanence but rather about a particular group of circuits that happen to do something object-permanence-like on the training distribution. Or maybe π refers to the predictor’s alien diamond-like concept that humans wouldn’t agree with if they understood it but does nevertheless explain the prediction of the same observations.
Is it an assumption of your work here (or maybe a desideratum of whatever you find to do mechanistic explanations) that the mechanistic explanation is basically in terms of a world model or simulation engine, and we can tell that’s how it’s structured? I.e., it’s not some arbitrary abstract summary of the predictor’s computation. (And also that we can tell that the world model is good by our lights?)
Partitions (of some underlying set) can be thought of as variables like this:
The number of values the variable can take on is the number of parts in the partition.
Every element of the underlying set has some value for the variable, namely, the part that that element is in.
Another way of looking at it: say we’re thinking of a variable X as a function f from the underlying set S to X’s domain D. Then we can equivalently think of X as the partition {f⁻¹(d) : d ∈ D} of S, with (up to) |D| parts.
In what you quoted, we construct the underlying set by taking all possible combinations of values for the “original” variables. Then we take all partitions of that to produce all “possible” variables on that set, which will include the original ones and many more.
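A concrete version of this construction (my own sketch, with two made-up original variables X and Y):

```python
from itertools import product
from collections import defaultdict

# Two made-up "original" variables: X with domain {0, 1}, Y with domain {'a', 'b'}.
domains = {'X': [0, 1], 'Y': ['a', 'b']}

# The underlying set: all combinations of values of the original variables.
underlying = list(product(*domains.values()))   # [(0,'a'), (0,'b'), (1,'a'), (1,'b')]

def as_partition(var_index):
    """View a variable as a partition: group elements by the value the variable assigns them."""
    parts = defaultdict(set)
    for element in underlying:
        parts[element[var_index]].add(element)
    return list(parts.values())

print(as_partition(0))  # X: two parts, {(0,'a'), (0,'b')} and {(1,'a'), (1,'b')}
print(as_partition(1))  # Y: two parts, {(0,'a'), (1,'a')} and {(0,'b'), (1,'b')}
# Any other partition of `underlying` also counts as a "possible" variable,
# e.g. {{(0,'a')}, {(0,'b'), (1,'a'), (1,'b')}}, giving many more variables than the originals.
```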
I agree with you—and yes we ignore this problem by assuming goal-alignment. I think there’s a lot riding on the pre-SLT model having “beneficial” goals.
I agree with this post. However, I think it’s common amongst ML enthusiasts to eschew specification and defer to statistics on everything. (Or datapoints trying to capture an “I know it when I see it” “specification”.)