In general we don’t have an explicit representation of the human’s beliefs as a Bayes net (and none of our algorithms are specialized to this case), so the only way we are representing “change to Bayes net” is as “information you can give to a human that would lead them to change their predictions.”
That said, we also haven’t described any inference algorithm other than “ask the human.” In general inference is intractable (even in very simple models), and the only handle we have on doing fast+acceptable approximate inference is that the human can apparently do it.
(Though if that were the only problem, then we would also expect to be able to find some loss function that incentivizes the AI to do inference in the human Bayes net.)
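To make the intractability point concrete, here is a minimal sketch (our own illustration, not anything from the algorithms discussed above) of exact inference by brute-force enumeration in a toy chain-structured Bayes net over binary variables. The network structure, the CPT values, and all function names are hypothetical; the point is just that enumerating the joint distribution costs O(2^n) in the number of variables, which is why naive exact inference breaks down even in fairly small models.

```python
from itertools import product

def chain_cpt(n):
    # Toy chain X0 -> X1 -> ... -> X(n-1) over binary variables.
    # P(X0 = 1) = 0.6; each child equals its parent with probability 0.9.
    # (Made-up numbers, purely for illustration.)
    return [{None: 0.6}] + [{0: 0.1, 1: 0.9} for _ in range(n - 1)]

def joint_prob(assignment, cpt):
    # Probability of one full assignment: product of the CPT entries.
    p = 1.0
    for i, x in enumerate(assignment):
        parent = assignment[i - 1] if i > 0 else None
        p1 = cpt[i][parent]  # P(Xi = 1 | parent)
        p *= p1 if x == 1 else 1.0 - p1
    return p

def posterior(n, query_idx, evidence):
    """P(X_query = 1 | evidence), by enumerating all 2**n assignments."""
    cpt = chain_cpt(n)
    num = den = 0.0
    for assignment in product([0, 1], repeat=n):
        # Skip assignments inconsistent with the observed evidence.
        if any(assignment[i] != v for i, v in evidence.items()):
            continue
        p = joint_prob(assignment, cpt)
        den += p
        if assignment[query_idx] == 1:
            num += p
    return num / den

# Even this tiny chain already enumerates 2**10 = 1024 assignments;
# the count doubles with every additional variable.
print(posterior(10, 9, {0: 1}))
```

Chain-structured nets like this one actually admit fast exact inference (message passing), but for general graphs exact inference is NP-hard, and this exponential enumeration is the only fully general method; hence the appeal of a human who can apparently produce acceptable approximate answers quickly.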