It seems like the only thing stopping z from primarily containing object-level knowledge about the world is the human prior about the unlikelihood of object-level knowledge. But humans are really bad at assigning priors even to relatively simple statements—this is the main reason that we need science.
Agree that humans are not necessarily great at assigning priors. The main response to this is that we don’t have a way to get a better prior than an amplified human’s best prior: if amplified humans think the NN prior is better than their own, they can always just defer to it. So in theory this should be at least as good as either prior alone, and the best prior we can actually use.
Science seems like it’s about collecting more data and measuring the likelihood, not changing the prior. We still need to use our prior: there are infinitely many scientific theories that fit any given data, but we prefer the ones that are simple and elegant.
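In Bayesian terms (just making the division of labour explicit):

$$P(\text{theory} \mid \text{data}) \;\propto\; \underbrace{P(\text{data} \mid \text{theory})}_{\text{what more data improves}} \times \underbrace{P(\text{theory})}_{\text{still ours to supply}}$$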
z will consist of a large number of claims, but I have no idea how to assign a prior to the conjunction of many big claims about the world, even in theory. That prior can’t be calculated recursively, because there may be arbitrarily complicated interactions between different components of z.
One thing that helps a bit here is that we can use an amplified human. We also don’t need the human to calculate the prior directly, just to do things like assess whether some change makes the prior better or worse. But I’m not sure how much of a roadblock this is in practice, or what Paul thinks about this problem.
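To make the “assess changes, don’t calculate the prior” idea concrete, here is a minimal toy sketch. Everything in it is a hypothetical stand-in (the toy “prior” just prefers fewer claims, and z is a list of numbers rather than claims about the world); the point is only that the search loop consults comparative judgments about edits to z, never an absolute prior over all of z.

```python
import random

def human_log_prior(z):
    # Stand-in: the "human" finds shorter sets of claims more plausible a
    # priori. A real prior would involve arbitrary interactions between
    # claims, which is exactly the difficulty raised above.
    return -len(z)

def human_log_likelihood(z, data):
    # Stand-in for: how well do humans armed with z predict the data?
    if not z:
        return float("-inf")
    return -sum(min(abs(x - c) for c in z) for x in data)

def propose_edit(z, data):
    # Local modification to z: add, drop, or tweak one claim.
    z = list(z)
    move = random.choice(["add", "drop", "tweak"]) if z else "add"
    if move == "add":
        z.append(random.choice(data))
    elif move == "drop":
        z.pop(random.randrange(len(z)))
    else:
        z[random.randrange(len(z))] += random.gauss(0, 0.1)
    return z

def hill_climb(data, steps=2000):
    # The human never computes an absolute prior on z: each step only asks
    # whether the edit made the (prior + likelihood) score better or worse.
    z = []

    def score(z):
        return human_log_prior(z) + human_log_likelihood(z, data)

    for _ in range(steps):
        z_new = propose_edit(z, data)
        if score(z_new) > score(z):
            z = z_new
    return z

if __name__ == "__main__":
    data = [random.gauss(mu, 0.2) for mu in (0.0, 5.0) for _ in range(20)]
    print(sorted(hill_climb(data)))  # typically settles on a couple of claims, near 0 and 5
```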
Consider the following proposal: “train an oracle to predict the future, along with an explanation of its reasoning. Reward it for predicting correctly, and penalise it for explanations that sound fishy”. Is there an important difference between this and imitative generalisation?
Yeah, the important difference is that in this case there’s nothing constraining the explanations to match the actual reasoning the oracle is using, so the explanations you’re getting are not necessarily predictive of the kind of generalisation that will happen. In IG it’s important that the quality of z is measured by having humans use it to make predictions.
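A toy way to see the structural difference, with hypothetical names throughout (neither setup is specified this concretely above): in the first loss the explanation is a free rider, while in the second the prediction has to flow through z.

```python
def prediction_error(pred, outcome):
    return (pred - outcome) ** 2

def fishy_oracle_loss(predict, explain, question, outcome, fishiness):
    # The explanation is scored separately from the prediction, so nothing
    # ties it to the reasoning that actually produced the prediction.
    return prediction_error(predict(question), outcome) + fishiness(explain(question))

def ig_loss(z, human_with_z, question, outcome, neg_log_prior):
    # In IG the human-using-z makes the prediction itself, so z only scores
    # well if it genuinely carries the knowledge needed to predict.
    return prediction_error(human_with_z(z, question), outcome) + neg_log_prior(z)

# Stand-ins for the oracle: a plausible-sounding explanation scores well
# even though nothing links it to the computation behind the prediction.
predict = lambda q: 2.0 * q
explain = lambda q: "the answer is double the question"  # need not match the real reasoning
fishiness = lambda e: 0.0                                # sounds fine, so no penalty
print(fishy_oracle_loss(predict, explain, 3.0, 6.0, fishiness))  # 0.0

# Stand-ins for IG: the human applies z directly, so z carries the knowledge.
z = {"slope": 2.0}                           # a single "claim" about the world
human_with_z = lambda z, q: z["slope"] * q   # human uses the claim to predict
neg_log_prior = lambda z: float(len(z))      # human's plausibility judgment
print(ig_loss(z, human_with_z, 3.0, 6.0, neg_log_prior))  # 0.0 error + 1.0 prior cost
```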
An agent can “generalise badly” because it’s not very robust, or because it’s actively pursuing goals that are misaligned with those of humans. It doesn’t seem like this proposal distinguishes between these types of failures. Is this distinction important in motivating the proposal?
I’m not sure exactly what you’re asking. I think the proposal is motivated by something like: having the task be IID (so we can check arbitrary outputs from our model and confirm it’s generalising correctly) buys us a lot of safety properties. If we have this guarantee, we only have to worry about rare or probabilistic defection, not that the model might be giving us misleading answers for every question we can’t check.
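A back-of-the-envelope sketch of why this guarantee is worth so much (numbers purely illustrative): if queries are IID and we audit a random fraction of them, any model that defects at a non-trivial rate gets caught almost surely, leaving only rare defection as the residual worry.

```python
def p_caught(defect_rate, audit_rate, n_queries):
    # Probability that at least one bad answer lands in the audited sample,
    # assuming IID queries, a fixed defection rate, and independent random audits.
    return 1 - (1 - defect_rate * audit_rate) ** n_queries

# Auditing 1% of 100,000 queries at random:
for q in (0.1, 0.01, 0.001):
    print(f"defection rate {q}: caught with prob {p_caught(q, 0.01, 100_000):.3f}")
# Frequent defection is caught with probability ~1; only very rare defection
# survives, which is exactly the residual worry described above.
```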