Nah, I really wanna chill the fuck out. I've sacrificed a really high ratio of the good things in my life to prevent the bad future, and I legitimately think even very shitty, ground-down-by-society lives in no-tech or oppressed areas are going to be worsened and then extinguished by AI on the default trajectory. I get why you'd wonder this and don't think you're a fool for thinking it (I'm slightly irritated at your rudeness, even though I get why you'd be rude in a comment like this), but it just doesn't seem true to me. I turn down all sorts of opportunities to use my knowledge of AI to make a bunch of money fast, e.g. taking a nice job or starting another company. (I sold out of my last one at the minimum stock price the cofounder would accept when I got spooked by the idea of having it on my balance sheet, though in retrospect that was probably not a very productive way to influence the world for the better.)
Yudkowsky has said similar things: someone donated enough money to him that he's set for life, so he has no financial incentive to push a worldview now. He's literally just losing time he could spend on doing something more fun, and he does spend time on the more-fun things anyway.
It's not a fun hobby. If you could give me real evidence that things are action-unconditionally fine (fine no matter what we do), that we don't have to work really hard even to land at the P(doom) levels people already anticipate, then I'd be very excited to chill out and take a better-paying job.
That said: yes, I agree that the majority of your reasoning about how to make the world better should focus on people who are not in his situation, and that it's slightly cringe to worry about really well-off people losing their wealth rather than about how to make the world better for poor people, sick people, and so on.
But mostly I expect that evolution does not favor nice things very strongly, that it takes a while to get around to coughing up nice things, and that if you speed up memetic evolution a lot, it mostly lets aggressive replicators win for a while until cooperative groups knit back together. It seems like we (all humans and all current AIs) are ripe to be beaten by an aggressive replicator unless we get our act together on figuring out how to make durably defensible cooperative interactions, much more durable than anything that has ever been achieved before.
Else your house (or datacenter) eventually gets paved over by a self-replicating factory, and before that, your mind gets paved over by a manipulative memeplex. It's fair to worry that any particular memeplex (including the alignment memeplex) might be doing this, though; there is very much a history of memeplexes saying "we're the real cooperative memeplex" and turning out to have either lied or simply been wrong.