I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI. It seems like a very natural combination given e.g. “Marx subsequently developed an influential theory of history—often called historical materialism—centred around the idea that forms of society rise and fall as they further and then impede the development of human productive power.” It seems likely that LW being very pro-capitalism has meaningfully contributed to the lack of these sorts of people.[1] I guess ACS carries something like this vibe. But (unlike ACS) it also seems natural to apply this sort of view of history to AI except also thinking that fooming will be fast.[2]

Relatedly, I wonder if I should be “following the money” more when thinking about AI risk. In particular, instead of saying that “AI researchers/companies” will disempower humanity, maybe it would be appropriate to instead or additionally say “(AI) capitalists and capital and capitalism”. My current guess is that while it is appropriate to place a bunch of blame on these, it's also true that e.g. Soviet or Chinese systems aren't (and wouldn't be) doing better, so I've mostly avoided saying this so far. That said, my guess is that if the world were much more like Europe, we would be dying with significantly more dignity, in part because Europe got some hyperparameters of governance+society+culture+life more right by blind luck, but also in part because it got some hyperparameters right through good reasoning that was basically tracking something logically connected to AI risk (though so far not significantly explicitly tracking AI risk), e.g. via humanism. Another example of a case where I wonder whether I should follow the money more: to what extent should I think of Constellation being wrong/confused/thoughtless/slop-producing on AGI risk in ways xyz as “really being largely about” OpenPhil/Moskovitz/[some sort of outside-view impression on AI risk that maybe controls these] being wrong/confused/thoughtless/slop-liking on AGI risk in ways x′y′z′?
I've been meaning to spend at least a few weeks thinking these sorts of questions through carefully, but I haven't gotten around to that yet. I should maybe seek out some interesting left-Hegelians/Marxists/communists/socialists to talk to and try to understand how they'd think about these things.

Under this view of history, political/economic systems that produce less growth but don't create the incentives for unbounded competition are preferred. Sadly, for Molochian reasons this seems hard to pull off.
Imo one interesting angle of attack on this question: it seems plausible/likely that an individual human could develop for a very long time without committing suicide, with AI or otherwise (imo unlike humanity as it is currently organized). We should be able to understand what differences between an individual human and society as a whole are responsible for this; my guess is that there is a small set of properties here that could be identified. We could then try to figure out the easiest way to make humanity have these properties.
By saying this, I don’t mean to imply that LW is incorrect/bad to be very pro-capitalism. Whether it is bad is mostly a matter of whether it is incorrect, and whether it is incorrect is an open question to me.
I guess this post of mine is the closest thing that quickly comes to mind when I try to think of something carrying that vibe, but it’s still really quite far.
"I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI."

Maybe it's just my bubble (and I really do not want to offend anyone, only to report honestly on what I observe around me), but understanding economics seems right-wing-coded. More precisely, when I talk to right-wing people about economics, there is a mix of descriptive and normative; when I talk to left-wing people about economics, it is normative only: what should be done, in their opinion, often ignoring second-order effects. Describing the economy as it is seems like expressing approval, and approving of capitalism is right-wing.
Basically, if you made a YouTube video containing zero opinion on how things should be, only explaining the basic things about supply and demand (like, how scarcity makes things more expensive in a free market) and similar stuff, people listening to the video would label you as right-wing. Many of those who identify as left-wing would even dismiss the video as right-wing propaganda.
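To make that descriptive point concrete, here is a minimal toy sketch of the supply-and-demand claim; the linear curves and every number are made up purely for illustration. When the supply curve shifts down so that less is offered at every price, the market-clearing price rises.

```python
# Toy illustration of "scarcity makes things more expensive in a free market".
# The linear demand/supply curves and all numbers below are made-up values.

def equilibrium_price(a, b, c, d):
    """Solve demand a - b*P = supply c + d*P for the market-clearing price P."""
    return (a - c) / (b + d)

# Baseline market: demand Qd = 100 - 2P, supply Qs = 10 + 3P.
baseline = equilibrium_price(a=100, b=2, c=10, d=3)   # 18.0

# Scarcity: the supply curve shifts down (less offered at every price).
scarce = equilibrium_price(a=100, b=2, c=-20, d=3)    # 24.0

print(f"baseline price: {baseline:.1f}, price under scarcity: {scarce:.1f}")
```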
So, if my understanding is correct, this seems like a problem the left wing needs to solve internally. There is not much we can do as rationalists when someone makes not understanding something a signal of loyalty.
"I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI."

I noticed this too. In defence of LW, the Overton window here isn't as tightly policed as in other places on the internet, but it's noticeable. Recently, I seem to have found some of its edges here and here.
“Follow the money” is a good instinct, but I do think a lot of it is just memes fighting other memes using their hosts. A lot of this plays itself out by manipulating credibility signals (i.e. the voting mechanism).
Ultimately there’s nothing any of us can do other than to follow, interrogate and stress-test the arguments being made.