How sure are you that brain emulations would be conscious?

Or the converse problem—an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don’t quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.

- Eliezer Yudkowsky, “Value is Fragile”

I had meant to try to write a long post for LessWrong on consciousness, but I’m getting stuck on it, partly because I’m not sure how well I know my audience here. So instead, I’m writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?

There’s actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers’ paper “The Singularity: A Philosophical Analysis” has a good introduction to the debate in section 9, including some relevant terminology:

Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious. Functionalist theorists of consciousness hold that what matters to consciousness is not biological makeup but causal structure and causal role, so that a nonbiological system can be conscious as long as it is organized correctly.

So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.

Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: “biological” is a fuzzy, high-level category that doesn’t seem like it could be of any fundamental importance. So probably emulations will be conscious—but I’m not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I’d caution against being too sure of much of anything about consciousness. I’m worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail that emulations are conscious.

Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don’t think it’s much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it’s worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we’d all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to “voluntarily” upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson’s “em revolution” scenario less appealing.

For a long time, I’ve vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope; I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we’ll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?

That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?