I read two very different things interacting in that scheme. There is authoring ontologies/frames, and then there is the issue of mulliganing over valuations. The divide here: using a (human- or externally-provided) fancy representation to get a good grip on data, which is relatively easy, is a different beast from being able to make a representation where there previously was none. But the backsies issue happens even if we are stuck with a constant representation scheme: what we would naively train for now, we would not train for, or would train the opposite of, at a possible future date.
Then there is the whole issue of valuations changing because the representations used to derive the values change structure, as when you do things differently depending on whether or not you use probabilities in your thinking.
Post-AGI, bleeding-edge scientific understanding is likely to be based on silicon-generated representations. There the yardstick can't be that the result is correct, since humans are in a worse position to tell what the correct result is. So a big trick/challenge is recognising/trusting that a machine conclusion should be put to societal use when you would not have confidence in a biological brain coming up with a similar claim.