Or consider the idea that idealization involves or is approximated by “running a large number of copies of yourself, who then talk/argue a lot with each other and with others, […]”
Later in the “Ghost civilizations” section you mentioned the idea of ghost copies “supervising/supporting/scrutinizing an explorer trying some sort of process or stimulus that could lead to going off the rails”. It’s interesting to think about technologies like lie detectors in this context, for mitigating risks like the “memetic hazards that are fatal from an evaluative perspective” that you mentioned. For example, suppose that a Supervisor Copy asks many Explorer Copies to enter a secure room that is then locked. The Explorer Copies then pursue a certain risky line of thought X and write down their conclusions, but the Supervisor Copy only gets to read those conclusions if all the Explorer Copies pass a lie-detector test in which each claims not to have stumbled upon any “memetic hazard” or the like.
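To make the information flow concrete, here is a minimal toy sketch of that gating rule (a hypothetical illustration in Python, not anything from the post; names like `ExplorerReport` and `supervisor_reads` are made up):

```python
from dataclasses import dataclass

@dataclass
class ExplorerReport:
    conclusion: str            # what the explorer wrote down in the locked room
    passed_lie_detector: bool  # passed the "I found no memetic hazard" test

def supervisor_reads(reports: list[ExplorerReport]) -> list[str] | None:
    """Release the written conclusions only if *every* explorer passes.

    Requiring unanimity means a single compromised explorer cannot
    smuggle a hazardous conclusion out of the room.
    """
    if all(r.passed_lie_detector for r in reports):
        return [r.conclusion for r in reports]
    return None  # room stays sealed; the supervisor learns only that the check failed

# Example: one explorer fails the test, so nothing is released.
reports = [
    ExplorerReport("Line of thought X seems safe to pursue further.", True),
    ExplorerReport("X leads somewhere we should not look.", False),
]
print(supervisor_reads(reports))  # -> None
```

Note that even the “check failed” outcome leaks one bit of information to the Supervisor Copy; a stricter version of the protocol might randomize or delay that signal.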
As an aside, all those copies can be part of a single simulation that we run for this purpose, in which they all get treated very well (even if they end up without the ability to affect anything outside the simulation).
Related to what you wrote near the end (“In a sense, I can use the image of them…”), I just want to add that using an imaginary idealized version of oneself as an advisor may be a great way to mitigate some harmful cognitive biases, and it can also serve as a simple productivity trick.