Jessica Taylor. CS undergrad and Master’s at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
I think if something in M isn’t “really mental”, e.g. there is no world representation, it shouldn’t be included in M. I’m guessing that, depending on the method of encryption, keys might be checkable. Even if they are not directly checkable, there’s a pigeonhole argument that almost all (short) keys would decrypt to noise. I don’t know whether it’s possible to intentionally “encrypt two minds at once” with homomorphic encryption.
And yeah, if there isn’t a list of minds in R, then it’s hard for g to be efficiently computable, since computing it would amount to a search. That’s part of what makes homomorphically encrypted consciousness paradoxical, and what makes possibility C worth considering.
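To make both of those points concrete, here is a toy counting sketch in Python. The cipher and the “structured output” test are my own stand-ins (nothing like real homomorphic encryption), but they show the shape of the argument: almost every wrong short key decrypts to noise, and a g that has to recover the mind by searching the key space does work exponential in key length.

```python
# Toy counting sketch, not a real encryption scheme. "Structured" output is
# crudely proxied by printable ASCII; a real "mind-like" check would be far
# richer, but the counting point is the same.
import hashlib

KEY_BITS = 12        # deliberately tiny so exhaustive search is feasible

def toy_decrypt(ciphertext: bytes, key: int) -> bytes:
    # Keyed XOR against a hash-derived pad; encryption and decryption coincide.
    pad = hashlib.sha256(key.to_bytes(4, "big")).digest()[:len(ciphertext)]
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

def looks_structured(plaintext: bytes) -> bool:
    # Stand-in for "decrypts to something mind-like rather than noise".
    return all(32 <= b < 127 for b in plaintext)

secret_key = 1234
ciphertext = toy_decrypt(b"hi world", secret_key)   # XOR cipher: encrypt == decrypt

# Pigeonhole point: of the 2**KEY_BITS possible keys, almost none yield
# structured output, since a random 8-byte block is all-printable with
# probability (95/256)**8, i.e. well under 0.1%.
hits = [k for k in range(2 ** KEY_BITS) if looks_structured(toy_decrypt(ciphertext, k))]
print(f"{len(hits)} of {2 ** KEY_BITS} keys give structured output")

# Search point: without a recorded key, a g that recovers the mind from the
# encrypted state has to run exactly this brute-force loop, whose cost grows
# as 2**KEY_BITS -- hopeless at realistic key sizes (128 bits and up).
```

The same counting is also why a key is, in practice, checkable: structured output essentially only shows up under the right key.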
Regarding the subjective existence of subjective states: I think that if you codify subjective states, then you can ask questions like “which subjective states believe that other subjective states exist?”, since that belief is similar to other beliefs.
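As a very rough illustration (my own toy encoding, not anything from the original discussion), once subjective states are codified as data, that question becomes a mechanical query:

```python
# Minimal sketch of "codifying subjective states": a state is just a name plus
# a set of believed propositions, including existence claims about other states.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SubjectiveState:
    name: str
    beliefs: frozenset = field(default_factory=frozenset)  # e.g. ("exists", "B")

def believers_in_existence_of(target: str, states: list) -> list:
    """Which subjective states believe that `target` exists?"""
    return [s.name for s in states if ("exists", target) in s.beliefs]

states = [
    SubjectiveState("A", frozenset({("exists", "B")})),
    SubjectiveState("B", frozenset({("exists", "A"), ("exists", "B")})),
    SubjectiveState("C", frozenset()),
]
print(believers_in_existence_of("B", states))   # ['A', 'B']
```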