Has the LW crowd ever adjusted for one thing that is common (I suppose) to the majority of the most active and established doomers here and elsewhere, and that makes their opinions so uniform: namely, that they are all successful and important people who have achieved high fulfillment and (although not a major factor) capital and wealth in this present life of theirs? They all have a great deal to lose if perturbations happen. I have never seen anything about this peculiar issue here on LW. Aren't they all just scared of descending to the level of the less fortunate majority, and might that be the only true reason for them being doomers? Oh, this is so stupid; if it's so, there will be no answer, only selective amnesia. Take Yudkowsky: who is he if AI is not going to kill its parents? In that case he's nobody. No chance he's even able to consider this; his life is a bet on him being somebody.
Does not sound plausible to me. If all worries about AI somehow magically disappeared overnight (God descends from Heaven and provides a 100% credible mathematical proof that any superhuman AI will necessarily be good), Yudkowsky would still be the guy who wrote the Sequences, founded the rationalist community, created a website where the quality of discourse is visibly higher than on the rest of the internet, etc. With the threat of AI out of the way, the rationalist community would probably focus again on developing the art of human rationality, increasing the sanity waterline, etc.
Also, your argument could be used to dismiss anything. Doctors talking about cancer? They just worry that if people are no longer afraid of diseases, no one will treat the doctors as high-status anymore. Etc.
I don't really agree with your comment or like it very much, but I think buried underneath the negativity there is something valuable about making sure not to be too personally invested in what you believe about AI. You should be able to change your mind without that affecting your public reputation or whatever. It is possible some people have found meaning in the AI alignment mission, and if it turned out that mission was counterproductive, it might be hard for them to accept that.
Nah, I really wanna chill the fuck out. I've sacrificed a really high ratio of the good things in my life to prevent the bad future, and I legitimately think even very shitty, ground-down-by-society lives in no-tech or oppressed areas are going to be worsened and then extinguished by AI on the default trajectory. I get why you'd wonder this and don't think you're a fool for thinking it; I'm slightly irritated at your rudeness, even though I get why you'd be rude in a comment like this, but it just doesn't seem true to me. I turn down all sorts of opportunities to use my knowledge of AI to make a bunch of money fast, e.g., take a nice job or start another company. (I sold out of my last one at the minimum stock price the cofounder would accept when I got spooked by the idea of having it on my balance sheet, though in retrospect that was probably not a very productive thing to do in order to influence the world for the better.)
Yudkowsky has said similar things: someone donated enough money to him that he's set for life, so he has no financial incentive to push a worldview now. He's literally just losing time he could spend doing something more fun, and he does spend time on the more-fun things anyway.
It’s not a fun hobby. If you could give me real evidence that things are action-unconditionally fine, that we don’t have to work really hard to achieve even the P(doom) people already anticipate, then I’d be very excited to chill out and have a better paying job.
That said, yes, I agree that the majority of your reasoning about how to make the world better should focus on people who are not in his situation, and that it's slightly cringe to be worrying about really well-off people losing their wealth rather than about how to make the world better for poor people and sick people and so on.
But mostly I expect that evolution does not favor nice things very strongly, that it takes a while for it to get around to coughing up nice things, and that if you speed up memetic evolution a lot, it mostly lets aggressive replicators win for a while until cooperation groups knit back together. It seems like we, all humans and all current AIs, are ripe to be beaten by an aggressive replicator unless we get our act together hard about figuring out how to make durably defensible cooperative interactions, much more durable than has ever been achieved before.
Else your house (or datacenter) eventually gets paved over by a self-replicating factory, and before that, your mind gets paved over by a manipulative memeplex. It's fair to worry that any particular memeplex (including the alignment memeplex) might be doing this, though; there is very much a history of memeplexes saying "we're the real cooperative memeplex" and turning out to have either lied or simply been wrong.