Agreed! The current version of CSHW depends on fairly high-tesla fMRI, which is still relatively new. There may be ways to adapt the concept to EEG, though that would take fairly advanced modeling and a great deal of validation.
The real answer, though, might be that we’re only now starting to clearly see the limits of the functional-localization paradigm in neuroscience, and the need for something like CSHW. I’m reminded of this paper: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
Selen evidently got a huge amount of pushback on her framework from ‘old guard skeptics’, and almost didn’t survive it professionally. So I might point to political/factional factors as well.
Hi Donald, author of opentheory.net here. Really appreciate the thoughtful comment. A few quick notes:
I definitely (and very strongly) do not “predict that agents that believe in open individualism will always cooperate in prisoners dilemmas.” As I said in the OP, “an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) — they’re essentially running incompatible versions of OI and will compete for resources.” I would say OI implies certain Schelling points, but I don’t think an agent that believes in OI must always cooperate, largely because of the ambiguity in what a ‘belief’ actually is. There’s a lot of wiggle room here; best to look at the implementation.
I think the overall purpose of discussing these definitions of personal identity is, first, to dissolve confusion (and perhaps to see how tangled up the ‘Closed Individualism’ cluster is); and second, to decipher the Schelling points each theory of identity implies. Predictions come only indirectly, via this second factor; mostly this is a definitional exercise.