To me, this is absurd. There must be something other than readability that defines what a simulation is. Otherwise, I could point to any sufficiently complex object and say: “this is a simulation of you.” Given sufficient time, I could come up with a reading grid of inputs and outputs that would predict your behaviour accurately.
Scott Aaronson’s paper “Why Philosophers Should Care About Computational Complexity” has a section, “Computationalism and Waterfalls,” which addresses this very directly. Read that section for the full argument, but the conclusion is:
Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.
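To make the point concrete, here is a toy sketch of my own (not from the paper; all names are made up): if the “object” carries no structure related to the problem, then any “reading grid” that extracts good chess moves from it has to contain the chess-playing computation itself, which is exactly the vacuity Aaronson is pointing at.

```python
# Toy sketch (hypothetical, not from Aaronson's paper) of the "reading grid" trick:
# an object whose states know nothing about chess, plus a decoder that "reads"
# chess moves off it. All of the work lives in the decoder.

def rock_state(t: int) -> int:
    # The "sufficiently complex object": its state at time t is just t.
    # Nothing about chess (or about you) lives in here.
    return t

def chess_engine(legal_moves: list[str]) -> str:
    # Stand-in for a real engine; this is where the hard computation would sit.
    return legal_moves[0]

def reading_grid(state: int, legal_moves: list[str]) -> str:
    # The "reading grid": claims to decode the rock's state into a chess move.
    # To produce sensible moves it must invoke (or have been precomputed by)
    # something that can already play chess, so the reduction is at least as
    # powerful as the problem it pretends to offload onto the rock.
    move = chess_engine(legal_moves)
    return move  # the rock's state contributed nothing

if __name__ == "__main__":
    print(reading_grid(rock_state(0), ["e2e4", "d2d4", "g1f3"]))
```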
Also, my model of your argument is “Saying consciousness is substrate independent creates all sorts of wacky results that don’t feel legit, therefore consciousness is not substrate independent.” Aaronson’s argument seems to eliminate all of the unreasonable conclusions of substrate independence that you invoked.