To build trust, and to find out whether there is information I don't know, I propose a test: a complementary tool to prediction markets.
Prediction markets are great for aggregating beliefs about known questions. But what about questions you don't know to ask? What about detecting that someone has a frame you haven't encountered? Here's a privacy-preserving way to discover unknown unknowns without revealing what you know or learning what they know.
Each party builds an AI that represents the knowledge they have. These AIs can be trained not to expose that knowledge, and they would interact in environments that don't log the content of their conversations with other AIs. What the environment would log is whether an AI's activations went out of distribution relative to its training set. An out-of-distribution event would imply the other AI raised some novel argument or possibility that you haven't thought of.
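As a minimal sketch of what "out of activation distribution" could mean concretely: one standard approach is a Mahalanobis-distance score over hidden-layer activations. The function names, the `threshold`, and the choice of Mahalanobis distance are my illustrative assumptions, not part of the proposal itself.

```python
# Minimal sketch: flag out-of-distribution activations via Mahalanobis
# distance. Assumes `train_acts` is an (N, D) array of hidden-layer
# activations collected while the AI ran on its own training
# distribution, and `act` is the activation vector from one exchange
# with the other party's AI. All names and the threshold are illustrative.
import numpy as np

def fit_activation_profile(train_acts: np.ndarray):
    """Estimate mean and (regularized) inverse covariance of in-distribution activations."""
    mean = train_acts.mean(axis=0)
    cov = np.cov(train_acts, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize so the covariance is invertible
    return mean, np.linalg.inv(cov)

def ood_score(act: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance of one activation vector from the training profile."""
    delta = act - mean
    return float(np.sqrt(delta @ cov_inv @ delta))

def novelty_flag(act, mean, cov_inv, threshold: float = 4.0) -> bool:
    """The only bit the environment logs: did this exchange look novel?"""
    return ood_score(act, mean, cov_inv) > threshold
```

Only that boolean leaves the environment, so neither party learns what was said, only that something unfamiliar came up.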
If the test showed there was something you were missing, you would have to be less sure of your ideas on those subjects.
The system would be mutually verified before use: both parties inspect the architecture, so neither can cheat without the other seeing.
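One concrete form that inspection could take (my illustration; the post doesn't specify a mechanism) is for both parties to independently hash the agreed-upon code and weights and compare digests before any interaction:

```python
# Illustrative sketch of one mutual-verification step: each party hashes the
# agreed artifact (code + weights) locally and the session proceeds only if
# the digests match. The file path and SHA-256 are my assumptions.
import hashlib

def artifact_digest(path: str) -> str:
    """SHA-256 of the inspected artifact, computed independently by each party."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```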
I'm imagining humanity fracturing into a million or a billion different galaxies, depending on each person's exact level of desire for interacting with AI. I think the human value of the unity of humanity would be lost.
I think we need to buffer people from having to interact with AI if they don't want to. But I also value having other humans around. So we should aim for something in between everyone living in perfect isolation and everyone being dragged kicking and screaming into the future.