I consider the “We don’t understand X, and we don’t understand Y, so maybe Y explains X” school of explanation (which is mostly my takeaway of Penrose, but I’ll admit to only having read less than half of one of his books, so I may be doing him an injustice) more of an appeal to shared ignorance than an actual explanation.
I read his whole book and I think this is a large part of it. What I would throw in is: "We know an awful lot about physics across all the size scales and energy scales likely to be relevant in the human brain. So it seems pretty unlikely there is some entirely unknown thing in there causing consciousness (the way radiation was entirely new when Madame Curie was exposed to it). So of the things we do know about, are there any that could be candidates for consciousness? Is there any physics where the outcomes seem not to be completely determined by what we already know, where something like consciousness may have its thumb on the scale and we haven't noticed it yet?"
But yes, primarily: we have a hole in our understanding of quantum mechanics, and we have a peg in what we don't know about consciousness. Is there ANY CHANCE this peg fits in that hole?
And when we get to the point where basing my predictions about a computer program’s behavior on the assumption that it’s conscious, or is a real person, or whatever, allows me to make reliable predictions about it, I’ll do the same thing for it as I do for other people.
We have long been at the point with machines where treating them in some senses like people tends to make them work better. People name their cars and their musical instruments and treat them like people. Psychologists label this "personification" or some such thing, but that doesn't make it any less useful. I'm sure there are people who treat the automated voice systems of some companies they call as though they were talking to a person following a script. This would not be enough to make me think those voice systems were people.
Similarly, humanity has spent eons in which significant parts of humanity did not treat other significant parts of humanity as though they were people. Black slaves in Europe and the United States come to mind, but there have likely been slaves for as long as there have been humans. The fact that one bunch of humans treated another bunch of humans as chattel does not make me believe the latter were not humans.
I can make reliable predictions about people around me while remaining agnostic about whether they are conscious or not. Most of the predictions I make would not change if I changed my opinion about whether they were conscious or not.
The idea that how brains manifest consciousness requires a new understanding of physics to explain seems as implausible to me as the idea that how brains manifest the Chinese language does.
You seem to be treating “assuming X allows me to make reliable predictions” and “some people behave as though X were true” as equivalent assertions. I agree with you that some people behave as though automated voice systems were people, but I don’t believe that assumption helps them make more reliable predictions than they otherwise could.
I continue to think that when assuming a computer program is conscious allows me to make reliable predictions about it (or, to be more precise, allows me to make more reliable predictions than assuming the opposite would), I’ll do so, and discussions of how computer programs don’t have various attributes that brains have which must therefore explain why brains are conscious and computer programs aren’t will just seem absurd.