I mean, I guess it could be… but on the other hand, if talking about how thing X may doom us all leads people to think “oh, X sounds sweet! Gonna work to build it faster!”, what the fuck are we supposed to do, exactly? That’s Duck Season, Rabbit Season levels of reverse psychology. It’s not like staying quiet about thing X dooming us all makes people not build thing X; they still will, maybe just a bit slower, and since no one will have talked about the risk of doom, it’ll doom us with 100% certainty. Like, if this mechanic really is how it works, and at no point does the freaking-out factor surpass the acceleration factor enough to produce some decent reaction, then we’re constitutionally unable to escape doom, or even to preserve a shred of dignity in the face of it.
> if talking about how thing X may doom us all leads people to think “oh, X sounds sweet! Gonna work to build it faster!”, what the fuck are we supposed to do, exactly?
That’s like the situation with Roko’s Basilisk. We didn’t find a good solution there either; everything seems to make things worse, including doing nothing.
EDIT:
I meant that, just as the natural human reaction to hearing about the Basilisk is “cool, let’s tell everyone”, the natural human reaction to hearing that AI could kill us all is “cool, let’s build it”.
I don’t really consider that one a big deal, as it’s indeed a very tiny subset of all possible AI futures and IMO doesn’t make a lot of sense in its current form (Yud said much the same, if I remember right, just that he’d avoid discussing it to avoid someone actually making it work, which I’m happy to go along with).
But in this case it’s a much broader class of problems. If we’re going to make things better in any way, we need to communicate the problem. That could mean someone instead decides AI sounds cool and wants to contribute to it. If the resulting rate of capabilities improvement outstrips the rate at which people then decide to crack down on it, the problem was unsolvable to begin with. You can try to refine your communication strategy, but there’s no version of this where you magically get a reaction without talking about the problem. The only thing resembling one would be “downplay AI’s abilities and treat it all as empty hype to discourage people from investing in it”, and that’s such a transparent lie it wouldn’t stick for a second. Many artists seem to be toeing this weird “AI is a threat to our jobs but also laughably incompetent” line, and it’s getting them absolutely nowhere.