Has there been any work on representation under extreme information asymmetry?
I’m thinking of something like AIs trained to make the same decisions you would, which are then given the secret or info-hazardous material so they can make governance decisions on your behalf, avoiding info leakage.