General question: mechanism design generally must cope with unknown agent beliefs/values, and thus relies on truth-incentivizing mechanisms, etc.
Has there been much (or any) work on using ZK proof techniques to prove some of these properties and make coordination easier? (i.e., a proof that I, agent X, am the result of some program P run with at least some amount of compute: trained on dataset D using optimizer O and utility function U, etc.)
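For concreteness, here's a toy sketch of the relation such a proof would attest. Everything here is hypothetical (the names, the commitment scheme, the flat hash commitments standing in for a real binding scheme); the check is done in the clear, whereas a real prover would emit a succinct ZK proof of the same check without revealing the witness:

```python
import hashlib
from dataclasses import dataclass

def commit(data: bytes) -> str:
    """Plain hash commitment, standing in for a real binding/hiding scheme."""
    return hashlib.sha256(data).hexdigest()

@dataclass(frozen=True)
class TrainingStatement:
    """Public statement: 'the agent whose weights are committed in weights_comm
    was produced by program P on dataset D with optimizer O, objective U, and
    at least min_flops of compute.' Only commitments are public."""
    program_comm: str    # commitment to program P
    dataset_comm: str    # commitment to dataset D
    optimizer_comm: str  # commitment to optimizer O's config
    utility_comm: str    # commitment to utility/loss function U
    weights_comm: str    # commitment to the resulting agent's weights
    min_flops: int       # claimed lower bound on training compute

def relation_holds(stmt: TrainingStatement, witness: dict) -> bool:
    """The relation a ZK proof would certify, checked here directly.
    The witness (actual program, data, weights, flop count) stays private
    with a real proof system."""
    return (
        commit(witness["program"]) == stmt.program_comm
        and commit(witness["dataset"]) == stmt.dataset_comm
        and commit(witness["optimizer"]) == stmt.optimizer_comm
        and commit(witness["utility"]) == stmt.utility_comm
        and commit(witness["weights"]) == stmt.weights_comm
        and witness["flops"] >= stmt.min_flops
        # A full relation would also verify that `weights` is actually the
        # output of running `program` on `dataset` -- the expensive part,
        # and the crux of making such proofs practical.
    )
```

The commitment checks are cheap; the hard part is the last (commented) clause, i.e., proving inside the proof system that the training computation itself was carried out as claimed, which is roughly what "proof-of-learning" / verifiable-training work aims at.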