There’s a huge difference between the types of cases, though. A 90% poisonous twinkie is certainly fine to call poisonous[1], but a 90% male group isn’t reasonable to call male. You said “if most people who would say they are in C are not actually working that way and are deceptively presenting as C,” and that seems far more like the latter than the former, because “fake” implies the entire thing is fake[2].
Though so is a 1% poisonous twinkie; perhaps the example should be that a meal which is 90% protein would be a “protein meal” without implying there is no non-protein substance present.
There is a sense in which this isn’t true; if 5% of an image of a person is modified, I’d agree that the image is fake—but this is because the claim of fakeness is about the entirety of the image, as a unit. In contrast, if there were 20 people in a composite image, and 12 of them were AI-fakes and 8 were actual people, I wouldn’t say the picture is “of fake people”; I’d need to say it’s a mixture of fake and real people. Which seems like the relevant comparison if, as you said in another comment, you are describing “empirical clusters of people”!
The OP is about two “camps” of people. Do you understand what camps are? Hopefully you can see that this does indeed induce the analog of “because the claim of fakeness is about the entirety of the image.” They gain and direct funding, consensus, hiring, propaganda, vibes, parties, organizations, etc., approximately as a unit. Camp A is a 90% poisonous twinkie. The fact that you are trying not to process this is a problem.
I’m pointing out that the third camp, which you deny really exists, does exist, and as an aside, is materially different in important ways from the other two camps.
You say you don’t think this matters for allocating funding, and you don’t care about what others actually believe. I’m just not sure why either point is relevant here.
Could you name a couple (2 or 3, say) of the biggest representatives of that camp? Biggest in the camp sense, so e.g. high-reputation researchers or high-net-worth funders.
You started by saying that most people who would say they are in C are fake, because they are not actually working that way and are deceptively presenting as C, and that A is also “fake” because it won’t work. So anyone I name in group C is, under your view, just being dishonest. I think that there are many people who have good-faith beliefs in both groups, but I don’t understand how naming them helps address the claim you made. (You also said that the view only matters if it’s held by funders, since I guess you claim that only people spending money can have views about what resource allocation should occur.)
That said, other than myself, who probably doesn’t count because I’m only in charge of minor amounts of money, it seems that a number of people at Open Philanthropy clearly implicitly embrace view C, based on their funding decisions which include geopolitical efforts to manage risk from AI and potentially lead to agreements, public awareness and education, and also funding technical work on AI safety.
And see this newer post, which also lays out a similar view: https://www.lesswrong.com/posts/7xCxz36Jx3KxqYrd9/plan-1-and-plan-2