Yeah, I was guessing that the smiley faces wouldn’t be the best example… I just wanted to draw something from the Eliezer/Bostrom universe, since I had mentioned the paperclipper beforehand. So maybe a better Eliezer-Bostrom example would be: we ask the AGI to “make us happy”, and it puts everyone paralyzed in hospital beds on dopamine drips. It’s not hard to imagine that after a couple of hours of a good high this would actually be a hellish existence, since human happiness is far more complex than the amount of dopamine in one’s brain (but of course: Genie in the Lamp, Midas’ Touch, etc.)
So, don’t you equate this kind of scenario with a significant amount of suffering? Again, forget the bad example of the smiley faces and reconsider. (I’ve actually read, in a popular LessWrong post about s-risks, Paul clearly saying that the risk of s-risk was 1/100th the risk of x-risk (which makes for even less than 1/100th overall). Isn’t that extremely naive, considering the whole Genie in the Lamp paradigm? How can we be so sure the Genie will only create hell once for every 100 times it creates extinction?)
a) I agree that a suffering-maximizer is quite unlikely. But you don’t necessarily need one to create s-risk scenarios. You just need a Genie in the Lamp scenario. Like the dopamine drip example, in which the AGI isn’t trying to maximize suffering, quite the contrary, but since it’s superhumanly smart at science yet lacks human common sense (a Genie), it ends up causing it.
b) Yes, I had read that article before. While it presents some fair solutions, I think it’s far from mostly solved. “Since hyperexistential catastrophes are narrow special cases (or at least it seems this way and we sure hope so), we can avoid them much more widely than ordinary existential risks.” Note the “at least it seems this way and we sure hope so”. Plus, what are the odds that the first AGI will be created by someone who listens to what Eliezer has to say? Not that bad, actually, if you consider US companies, but if you consider China, then dear God...
On your PS1: yeah, definitely not willing to do cryonics. And again, s-risks don’t need to come from threats, just misalignment.
Sorry if I black-pilled you with this, maybe there is no point… Maybe I’m wrong. I hope I am.
we ask the AGI to “make us happy”, and it puts everyone paralyzed in hospital beds on dopamine drips. It’s not hard to imagine that after a couple of hours of a good high this would actually be a hellish existence, since human happiness is far more complex than the amount of dopamine in one’s brain (but of course: Genie in the Lamp, Midas’ Touch, etc.)
This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn’t wear off. Being turned into a wirehead arguably kills you, but it’s a much better experience than death for the wirehead!
(I’ve actually read, in a popular LessWrong post about s-risks, Paul clearly saying that the risk of s-risk was 1/100th the risk of x-risk (which makes for even less than 1/100th overall). Isn’t that extremely naive, considering the whole Genie in the Lamp paradigm? How can we be so sure the Genie will only create hell once for every 100 times it creates extinction?)
I think the kind of Bostromian scenario you’re imagining is a slightly different line of AI risk from the one that Paul & the soft-takeoff crowd worry about. The whole genie in the lamp thing doesn’t, to me at least, seem likely to create suffering. If this hypothetical AI values humans being alive & nothing more than that, it might split your brain in half so that it counts as 2 humans being happy, for example. I think most scenarios where you’ve got a boundless optimiser superintelligence would lead to the creation of new minds that would perfectly satisfy its utility function.
“This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn’t wear off. Being turned into a wirehead arguably kills you, but it’s a much better experience than death for the wirehead!”
You keep dodging the point lol… As someone with some experience with drugs, I can tell you that it’s not fun. Human happiness is highly subjective and doesn’t depend on a single chemical. For instance, some people love MDMA; others (like me) find it too intense, too chemical, too fabricated a happiness. A forced lifetime on MDMA would be one of the worst tortures I can imagine. It would fry you. But even a very controlled dopamine drip wouldn’t be good. Anyway, I know you’re probably trolling, so just consider good old-fashioned torture in a dark dungeon instead...
On Paul: yes, he’s wrong, that’s how.
“I think most scenarios where you’ve got a boundless optimiser superintelligence would lead to the creation of new minds that would perfectly satisfy its utility function.”
True, except that, on that basis alone, you have no idea how that would happen or what it would imply for those new minds (and the old ones), since you’re not a digital superintelligence.
Thanks for the attentive commentary.