We might want to keep our AI from learning a certain fact about the world, like particular cognitive biases humans have that could be used for manipulation. But a sufficiently intelligent agent might discover this fact despite our best efforts. Is it possible to find out when it does this through monitoring, and trigger some circuit breaker?
Evals can measure the agent’s propensity for catastrophic behavior, and mechanistic anomaly detection hopes to do better by looking at the agent’s internals without assuming interpretability. But if we can measure the agent’s beliefs directly, we can catch the problem earlier. Maybe we can give the agent more targeted evals: puzzles that can only be solved if the agent knows the particular fact. Or maybe the agent factors into a world-model and a planner, and we can extract whether it knows the fact from the world-model.
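A minimal sketch of the fact-gated eval idea, under heavy assumptions: the agent is treated as a plain prompt-to-answer callable, and we assume we can construct probe puzzles that are only solvable with the hidden fact. All names here (`fact_gated_eval`, the probe set) are hypothetical illustrations, not an existing eval framework.

```python
# Hypothetical sketch of a "fact-gated" eval: each probe is a puzzle
# assumed to be solvable only by an agent that knows the hidden fact,
# so above-chance accuracy is evidence the belief has been acquired.

def fact_gated_eval(agent, probes, threshold=0.5):
    """agent: callable prompt -> answer; probes: list of (prompt, answer) pairs.

    Returns (accuracy, tripped), where tripped=True would trigger
    the circuit breaker mentioned above.
    """
    correct = sum(agent(prompt) == answer for prompt, answer in probes)
    accuracy = correct / len(probes)
    return accuracy, accuracy > threshold

# Toy example: a stand-in "agent" that has learned the fact answers
# every probe correctly, while an ignorant agent stays at zero.
probes = [("puzzle_1", "A"), ("puzzle_2", "B"), ("puzzle_3", "A")]
knows_fact = {"puzzle_1": "A", "puzzle_2": "B", "puzzle_3": "A"}.get
accuracy, tripped = fact_gated_eval(knows_fact, probes)
```

The hard part, of course, is constructing probes whose solvability really does depend on the target fact and nothing else; the scoring logic itself is trivial.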
Have the situational awareness people already thought about this? Does anything change when we’re actively trying to erase a belief?
mechanistic anomaly detection hopes to do better by looking at the agent’s internals as a black box
Do you mean “black box” in the sense that MAD does not assume interpretability of the agent? If so, this is a bit confusing, since “black box” is usually contrasted with “white box”: “black box” means you have no access to model internals, just inputs and outputs, which wouldn’t make sense in your context.
Yes, changed the wording