It's not an explicit form of the Primacy of Consciousness like prayer or wishing; it's implicit in QM and its basic premises. One example of an implicit form of PoC is projecting properties or aspects of consciousness onto reality and treating them as metaphysical rather than epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or the subject. This went round and round for a few hundred years until someone pointed out that it's both (the form/object distinction).
Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists, who hold that probabilities are a property of things (a metaphysical concept) rather than a measure of our lack of knowledge (an epistemological, Bayesian concept). Moreover, committing the PoC error will eventually lead you to supernaturalism, so MWI is just a logical outcome of that error.
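The Jaynesian point can be made concrete with a toy calculation (my own illustration, not from Jaynes directly): two agents with different evidence rationally assign different probabilities to the very same physical coin, so the probability lives in the agent's state of knowledge, not in the coin.

```python
from fractions import Fraction

# A coin is either fair (P(heads) = 1/2) or double-headed (P(heads) = 1).
# The probability below measures an agent's ignorance, not a property of
# the coin: as evidence accumulates, the number changes while the coin
# itself stays exactly the same.

def posterior_double_headed(prior, num_heads_seen):
    """P(double-headed | num_heads_seen heads in a row), via Bayes' rule."""
    like_dh = Fraction(1)                        # double-headed coin: always heads
    like_fair = Fraction(1, 2) ** num_heads_seen # fair coin: (1/2)^n for n heads
    return (like_dh * prior) / (like_dh * prior + like_fair * (1 - prior))

prior = Fraction(1, 2)                        # agent with no evidence: 50/50
print(posterior_double_headed(prior, 0))      # 1/2  (no flips observed yet)
print(posterior_double_headed(prior, 5))      # 32/33 (after five heads in a row)
```

Nothing about the coin changed between the two lines; only the agent's information did, which is the epistemological (Bayesian) reading Jaynes defends.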
Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Deep Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Deep Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bathwater. It seems to me that MWI commits the mind projection fallacy, or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrasing) that all these interpretations of QM just differ on where they hide the contradictions… they are all unsatisfactory, and it will take a genius to figure it out.