...but it’s not fake, it’s just confused according to your expectations about the future—and yes, some people may say it dishonestly, but we should still be careful not to deny that people can think things you disagree with, just because they conflict with your map of the territory.
That said, I don’t see as much value in dichotomizing the groups as others seem to.
What? Surely “it’s fake” is a fine way to say “most people who would say they are in C are not actually working that way and are deceptively presenting as C”? It’s fake.
If you said “mostly bullshit” or “almost always disingenuous” I wouldn’t argue, though I would still question whether it’s actually a majority of people in group C; I’m doubtful of that, but very unsure. Saying it is fake, though, would usually mean it is not a real thing anyone believes, rather than meaning that the view is unusual, confused, or wrong.
Closely related to: You Don’t Exist, Duncan.
I guess we could say “mostly fake”, but there are important senses in which “mostly fake” implies “fake simpliciter”. E.g. a twinkie made of “mostly poison” is just “a poisonous twinkie”. Often people do, and should, summarize things and then make decisions based on the summaries, e.g. “is it poison, or no” --> “can I eat it, or no”. My guess is that the conditions under which it would make sense for you to treat someone as genuinely holding position C, e.g. for purposes of allocating funding to them, are currently met by approximately no one. I could plausibly be wrong about that; I’m not so confident. But that is the assertion I’m trying to make, which is summarized imprecisely as “C is fake”, and I stand by making that assertion in this context. (Analogy: it’s possible for me to be wrong that 2+2=4, but when I say 2+2=4, what I’m asserting / guessing is that 2+2=4 always, everywhere, exactly. https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/ooypcn7qFzsMcy53R )
Just to clarify:
I am personally uncertain how hard stopping the race is. I have spent some time and money myself trying to promote IABIED, and I have also been trying to do direct alignment research, and when doing so I more often than not think explicitly in scenarios where the AI race does not stop.
Am I in group C? Am I a fake member of C?
I’d say I’d probably endorse C for someone who funds research/activism, and I have basically acted on it myself.
I.e. I’d say it’s reasonable to say “stop the race ASAP”, and in another context say “the race might not be stopped, what projects would still maybe increase odds of survival/success conditional on a race?”
(IDK and I wouldn’t be the one to judge, and there doesn’t necessarily have to be one to judge.) I guess I’d be a bit more inclined to believe it of you? But it would take more evidence. For example, it would depend how your stances express themselves specifically in “political” contexts, i.e. in contexts where power is at stake (company governance, internal decision-making by an academic lab about allocating attentional resources, public opinion / discussion, funding decisions, hiring advice). And if you don’t have a voice in such contexts then you don’t count as much of a member of C. (Reminder that I’m talking about “camps”, not sets of individual people with propositional beliefs.)
It seems like you’re narrowing the claim, and I’m no longer sure I disagree with the point, if I’m interpreting it correctly now.
If you’re saying that the group doesn’t act differently in ways that are visible to you, sure—but the group is defined by believing that both approaches are viable, and its members will sometimes support one side and sometimes the other. You could say this doesn’t matter for making individual decisions, because at any given time those people are functionally supporting one side or the other, but that’s different from saying they “are not actually working that way.”
There’s a huge difference between the types of cases, though. A 90% poisonous twinkie is certainly fine to call poisonous[1], but a 90% male group isn’t reasonable to call male. You said “most people who would say they are in C are not actually working that way and are deceptively presenting as C”; that seems far more like the latter than the former, because “fake” implies the entire thing is fake[2].
[1] Though so is a 1% poisonous twinkie; perhaps the better example is a meal that is 90% protein, which would be a “protein meal” without implying there is no non-protein substance present.
[2] There is a sense in which this isn’t true: if 5% of an image of a person is modified, I’d agree that the image is fake—but this is because the claim of fakeness is about the entirety of the image, as a unit. In contrast, if there were 20 people in a composite image, and 12 of them were AI fakes and 8 were actual people, I wouldn’t say the picture is “of fake people”; I’d need to say it’s a mixture of fake and real people. Which seems like the relevant comparison if, as you said in another comment, you are describing “empirical clusters of people”!
The OP is about two “camps” of people. Do you understand what camps are? Hopefully you can see that this indeed does induce the analog of “because the claim of fakeness is about the entirety of the image”. They gain and direct funding, consensus, hiring, propaganda, vibes, parties, organizations, etc., approximately as a unit. Camp A is a 90% poison twinkie. The fact that you are trying not to process this is a problem.
I’m pointing out that the third camp, which you deny really exists, does exist, and as an aside, is materially different in important ways from the other two camps.
You say you don’t think this matters for allocating funding, and you don’t care about what others actually believe. I’m just not sure why either point is relevant here.
Could you name a few (2 or 3, say) of the biggest representatives of that camp? Biggest in the camp sense, so e.g. high-reputation researchers or high-net-worth funders.
You started by saying that most people who would say they are in C are fake, because they are not actually working that way and are deceptively presenting as C, and that A is also “fake” because it won’t work. So anyone I name in group C is, under your view, just being dishonest. I think that there are many people with good-faith beliefs in both groups, but I don’t understand how naming them helps address the claim you made. (You also said that the view’s existence only matters if it’s held by funders, since I guess you claim that only people spending money can have views about what resource allocation should occur.)
That said, other than myself (who probably doesn’t count, since I’m only in charge of minor amounts of money), a number of people at Open Philanthropy seem to implicitly embrace view C, based on funding decisions that include geopolitical efforts to manage risk from AI and potentially lead to agreements, public awareness and education, and technical work on AI safety.
And see this newer post, which also lays out a similar view: https://www.lesswrong.com/posts/7xCxz36Jx3KxqYrd9/plan-1-and-plan-2