A reductio ad absurdum against naive Functional/Computational Theory of Mind (FCToM).

Thesis: FCToM is underspecified: such theories rely on a mapping (which I call the “level of analysis”) from physical systems to mathematical functions or computational algorithms (respectively), but they do not specify that mapping. (I’m collapsing functional and computational theories of mind, since I think the distinction isn’t relevant here.)

I believe this issue has great ethical significance: if we accept a naive version of FCToM, we may end up using a misleading level of analysis and, for example, committing massive mind crime. One form of naive FCToM would ignore this issue and say: “if two systems can be described as performing the same computations, then they have the same ‘mind’ (and hence the same consciousness and the same status as moral patients).”
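To make the naive claim concrete, here is a minimal sketch (my illustration, not part of the original argument): two deliberately different “substrates” that can both be described as computing the same function (NAND). The function names and the lookup-table framing are my own; the point is only that sharing a computational description at some level of analysis says nothing about what physically realizes it.

```python
def nand_gate(a: bool, b: bool) -> bool:
    """Stand-in for a silicon gate: the output is computed directly."""
    return not (a and b)


def nand_lookup(a: bool, b: bool) -> bool:
    """Stand-in for a very different realizer (say, people mailing letters):
    the same input/output behavior, produced by consulting a stored table."""
    table = {(False, False): True, (False, True): True,
             (True, False): True, (True, True): False}
    return table[(a, b)]


# At this level of analysis the two systems are "the same computation",
# even though what physically happens in each case differs wildly.
assert all(nand_gate(a, b) == nand_lookup(a, b)
           for a in (False, True) for b in (False, True))
```

Naive FCToM, as stated above, would attribute the same ‘mind’ to any two systems related in this way, which is exactly the move the reductio below targets.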

The reductio ad absurdum: Imagine a future totalitarian society in which individual humans are forced to play the role of logic gates in a computer that hosts an emulation of your brain. The gate-players communicate via snail mail, and severe punishment, social isolation, and redundancy are used to ensure that they perform their tasks faithfully. If you endorse naive FCToM, you would say “that’s just me!” But far more ethically relevant than the emulation is the experience of the many people enslaved in this system. Note: this is a thought experiment, and it may not be physically realizable (for instance, the people playing the gates may be too difficult to control). I think exploring that issue can provide a complementary critique of FCToM, but I’ll skip it for now.

Historical note: the idea for writing this post, although not the content, is somewhat inspired by a debate between Massimo Pigliucci and Eliezer Yudkowsky on Bloggingheads (EDIT: e.g., jumping in partway through here: https://www.youtube.com/watch?v=onvAl4SQ5-Q&t=2118s). I think Massimo won that argument.