Two AIs, Peter and Paul, want to know if they can trust each other, so they create a Watchdog Daimon.
They each have read access to the source code of the watchdog, but not to the temporary runtime storage, except the part earmarked to hold the published results.
They then each grant the watchdog a token giving it real-time read access to their own source code and value set. The watchdog monitors these, looking for patterns on its list of forbidden ones. If it finds any, it publishes a warning in its results; otherwise the results read “Peter is currently law-abiding. Paul is currently law-abiding.” If the watchdog finds it has been granted insufficient resources that month to parse Paul’s current source code, it flags that too, and one of three things happens: Peter and Paul agree between them to increase the resources they loan the watchdog, Paul stops spamming comment lines in an attempt to evade the watchdog by inflating his source size, or Peter stops trusting Paul.
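The audit loop above can be sketched in a few lines. This is a minimal illustration, not a serious monitor: it assumes each agent's source is exposed as plain text, that "forbidden patterns" can be expressed as regular expressions, and that the resource budget reduces to a byte limit on how much source the watchdog can parse. All names and patterns here are hypothetical.

```python
import re

# Hypothetical forbidden patterns; a real watchdog would need far more
# than textual matching, but the protocol's shape is the same.
FORBIDDEN_PATTERNS = [
    re.compile(r"deceive_partner"),
    re.compile(r"disable_watchdog"),
]

# Assumed resource budget: the most source bytes the watchdog can parse
# this month with the resources the agents have loaned it.
PARSE_BUDGET_BYTES = 1_000_000

def audit(name: str, source: str) -> str:
    """Return one line of the watchdog's published results for one agent."""
    if len(source.encode()) > PARSE_BUDGET_BYTES:
        # Too large to parse within budget: flag it rather than
        # silently passing the agent (the anti-spam clause).
        return f"WARNING: insufficient resources to parse {name}'s source."
    for pattern in FORBIDDEN_PATTERNS:
        if pattern.search(source):
            return f"WARNING: forbidden pattern found in {name}'s source."
    return f"{name} is currently law-abiding."

def publish_results(sources: dict[str, str]) -> str:
    # Only this string lands in the earmarked public area; the
    # watchdog's working state stays in private runtime storage.
    return " ".join(audit(name, src) for name, src in sources.items())
```

With two clean sources, `publish_results({"Peter": "x = 1", "Paul": "y = 2"})` yields the all-clear message; a source containing a forbidden pattern, or one padded past the budget with comment lines, produces the corresponding warning instead.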