While I think reference problems do defeat specific arguments a computational functionalist might want to make, my simulated upload’s references can be reoriented with only a little work. And I do not yet see the argument for why highly capable self-preservation should take particularly long for AIs to develop.
I think you’re spot on with this. If you gave an AI system signals tied to, e.g., CPU temperature or battery health, and trained it with objectives that make those variables matter, it would “care” about them in the same causal-role functional sense in which the sim cares about simulated temperature.
This falls out of teleosemantics (which I can see is a topic you’ve written a lot about!).
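To make the causal-role point concrete, here is a minimal sketch under assumed details: a toy RL setup where the agent observes a discretized “CPU temperature” and the objective penalizes overheating. Every name here (`ToyThermalEnv`, the actions, the thresholds) is hypothetical, not anyone’s actual training setup.

```python
import random

random.seed(0)

class ToyThermalEnv:
    """Hypothetical toy environment: action 1 = do useful work (earns task
    reward but heats the chip), action 0 = idle (no reward, cools it).
    The penalty term is what makes temperature a variable the policy must track."""

    def __init__(self):
        self.temp = 40  # degrees C, discretized in steps of 5

    def step(self, action):
        if action == 1:                       # work: task reward, plus heat
            self.temp = min(self.temp + 5, 100)
            reward = 1.0
        else:                                 # idle: cools down
            self.temp = max(self.temp - 5, 30)
            reward = 0.0
        if self.temp >= 90:                   # objective makes temperature matter
            reward -= 10.0
        return self.temp, reward

# Plain tabular Q-learning over (temperature, action) pairs.
q = {}
alpha, gamma, eps = 0.1, 0.9, 0.1
env = ToyThermalEnv()
state = env.temp
for _ in range(50_000):
    if random.random() < eps:                 # epsilon-greedy exploration
        action = random.randint(0, 1)
    else:
        action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
    next_state, reward = env.step(action)
    best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    state = next_state

# Inspect the greedy policy: it works while cool and idles as it nears
# the penalty threshold, i.e. it has learned to "care" about temperature.
for t in range(30, 101, 5):
    best = max((0, 1), key=lambda a: q.get((t, a), 0.0))
    print(t, "work" if best else "idle")
```

The learned policy throttles itself just below the penalty threshold: measured temperature now plays the same functional role for this agent that simulated temperature plays for the sim, which is all the causal-role sense of “caring” requires.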