Now imagine a sealed box that behaves exactly like a human, dutifully saying things like “I’m conscious”, “I experience red” and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
I think you’re doing some priming here by adding “dutifully”.
I believe that causally the output “I see red” is connected to the actual experience of seeing red. While it’s possible (depending on the level of optimization) that the optimized upload is saying “icey led” with an accent just to sound like the expected output, it still seems more plausible that the brain structure generating the red experience is preserved: maintaining a causal representation of reality is generally more optimal/compressed than maintaining a lookup table.
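To make that compression intuition concrete, here’s a toy sketch (purely illustrative, hypothetical example, nothing to do with how uploads are actually stored): a small rule that computes its answer is far smaller than a table that memorizes the answer for every possible input, even though the two have identical input-output behavior.

```python
# Toy illustration of "rule vs. lookup table" (hypothetical example, not a
# claim about how an optimized upload is represented). Both alternatives give
# the same input-output behaviour; only their sizes differ.

def is_reddish(r: int, g: int, b: int) -> bool:
    """A tiny 'causal' rule: compute the answer from the input."""
    return r > max(g, b)

# The lookup-table alternative: store one answer per possible 8-bit RGB input.
table_entries = 256 ** 3            # ~16.8 million (r, g, b) combinations
table_bytes = table_entries // 8    # at least one bit per stored answer
rule_bytes = len(is_reddish.__code__.co_code)  # a few dozen bytes of bytecode

print(f"lookup table: ~{table_bytes:,} bytes")
print(f"computed rule: ~{rule_bytes} bytes")
```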
You wouldn’t think that a book or an Eliza program saying “I see red” was conscious, right? The question is whether optimizing an upload can make it close to an Eliza program for some topics. I think it’s possible, given how little we can say about consciousness (i.e. how few different responses we’d need to code into the Eliza program).
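To show how small that canned-response space could be, here’s a toy Eliza-style sketch (hypothetical, just for the sake of argument): a handful of regex patterns already covers most of the first-person consciousness talk we’d probe it with.

```python
import re

# Toy sketch, not a claim about real uploads: a few canned patterns cover a
# surprising amount of first-person consciousness talk.
RULES = [
    (r"are you conscious",           "Yes, I'm conscious."),
    (r"what is it like to see red",  "I experience a vivid sensation of redness."),
    (r"do you (feel|experience)",    "It feels like something to be me."),
    (r"how do you know",             "I have direct introspective access to my experiences."),
]

def respond(prompt: str) -> str:
    """Return the first canned answer whose pattern matches the prompt."""
    for pattern, reply in RULES:
        if re.search(pattern, prompt.lower()):
            return reply
    return "Hmm, tell me more."

print(respond("Are you conscious?"))           # -> Yes, I'm conscious.
print(respond("What is it like to see red?"))  # -> I experience a vivid sensation of redness.
```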
Not disagreeing in principle; it depends on the degree of optimization and the set of data you expect the upload to have low error on. Eliza will succeed on a very small set of data but will fail quickly on anything close to real life. It’s possible that there’s a more compact representation that results in “I see red” than the DAG with consciousness in it, but I don’t think it’s that easy to optimize out without breaking other tests.
BTW you’ve read Blindsight right? Great scifi on this topic basically (with aliens instead of uploads)