So, why can’t I go give that talk to DeepMind right now?
A third (disconcerting) possibility is that the list of demands amounts to saying “don’t ever build AGIs”, because the global workspace / self-awareness / whatever is really the only practical way to build AGI. (I happen to put a lot of weight on that possibility, but it’s controversial and non-obvious.) If that possibility is true, then, well, I guess in principle DeepMind could still follow that list of demands, but it amounts to them giving up on their corporate mission, and even if they did, it would be very difficult to get every other actor to do the same thing forever.
If you do reject one or more of these assumptions, I would be curious to hear which ones, and why—and, in light of your different assumptions, how you think we should formulate the major question(s) about AI sentience, and about the relationship between sentience and moral patienthood.
(Warning: haven’t read or thought very much about this.) I guess I’m currently (weakly) leaning towards strong illusionism. But I think I can still care about things-computationally-similar-to-humans. I don’t know, at the end of the day, I care about what I care about. See last section here, and more in this comment.
More precisely, I’m hopeful (and hoping!) that one can soften the “we are rejecting strong illusionism” claim in #3 without everything else falling apart.
A third (disconcerting) possibility is that the list of demands amounts to saying “don’t ever build AGIs”
That would indeed be disconcerting. I would hope that, in this world, it’s possible and profitable to have AGIs that are sentient, but which don’t suffer in quite the same way, or as badly, as humans and animals do. It would be nice—but is by no means guaranteed—if the really bad mental states we can get occupy a kinda arbitrary and non-natural point in mind-space. This is all very hard to think about though, and I’m not sure what I think.
I’m hopeful (and hoping!) that one can soften the “we are rejecting strong illusionism” claim in #3 without everything else falling apart.
I hope so too. I was more optimistic about that until I read Kammerer’s paper; then I found myself getting worried. I need to understand that paper more deeply and figure out what I think. Fortunately, one of the things Kammerer worries about is that, on illusionism (or even just good old-fashioned materialism), “moral patienthood” will have vague boundaries. I’m not as worried about that, and I’m guessing you aren’t either. So maybe, if we’re fine with fuzzy boundaries around moral patienthood, things aren’t so bad.
But I think there’s other more worrying stuff in that paper—I should write up a summary some time soon!