I’m saying that he presents it as something he believes from his position of expertise and private knowledge, without argument, because it is exceedingly morally and financially beneficial to him (he gets to make massive amounts of money and not be a moral monster), rather than because he has any evidence; indeed, he stated it without offering any.
It is similar to a President who has just initiated a war saying “If there’s one thing I’ve learned in my life, it’s that war is inevitable, and the only questions are who wins and how to make sure it’s over quickly”, in a way that implies they should be absolved of responsibility for initiating the war.
Edit: Just as Casey B was writing his reply below, I edited out an example of Mark Zuckerberg saying something like “If there’s one thing I’ve learned in my career, it’s that social media is good, and the only choice is which sort of good social media we have”. Leaving this note so that people aren’t confused by his reply.
I truly have no settled opinion on Anthropic or you, and I get that it’s annoying to zoom in on something minor/unimportant like this, but:
I think your lack of charity here is itself a kind of propaganda, a non-truth-seeking behavior. Even in your Mark Zuckerberg hypothetical, considered purely “technically”/locally, if he’s not lying in that scenario, I don’t think it’s appropriate to call it propaganda. But of course, in that world as in this one, he is overwhelmingly likely to be lying or mind-warped past the point of the difference mattering.
Are you not equivocating / motte-and-bailey-ing between what seem to be three possible meanings of “propaganda”?
1) something like lies
2) doing something that’s also beneficial to you financially
3) stating something without evidence
You know the connotation “propaganda” conveys (1, and general badness), but you fell back on 2 and 3.
Also, while you may not agree with them, you know there are plenty of arguments (proposed evidence) for why we can’t stop. You are therefore being disingenuous. Must they be penned in toto by Dario to count? Also, didn’t he put serious effort behind Machines of Loving Grace? (I haven’t read it yet.) This isn’t an ‘out of the blue’ and/or ‘obvious bullshit’ position like the Zuck hypothetical; the whole AI world is debating/split on this issue; it is reasonably possible he really believes it, etc.
…
Edit: Saw your comment change just as I posted my reply. Not your fault, to be clear; just explaining why some of the content of my reply refers to things that no longer exist.
I don’t think that propaganda must necessarily involve lying. By “propaganda,” I mean aggressively spreading a message because it is politically convenient or useful for you, regardless of its truth (though propaganda is sometimes untrue, of course).
When a government puts up posters saying “Your country needs YOU”, this is intended to evoke a sense of duty and a sense of glory to be had; sometimes this sense of duty is appropriate, but sometimes your country wants you to participate in terrible wars for bad reasons. The government is saying it loudly because it is convenient for them if you think that way, and that’s not particularly correlated with the war being righteous or with the people who decided to make such posters even having thought much about that question. They’re saying it to win a war, not to inform their populace, and that’s why it’s propaganda.
Returning to the Amodei blogpost: I’ll happily concede that you don’t always need to give reasons for your beliefs when expressing them—context matters. But in every context—tweets, podcasts, ads, or official blogposts—there’s a difference between sharing something to inform and sharing it to push a party line.
I claim that many people have asked why Anthropic believes it’s ethical for them to speed up AI progress (by contributing to the competitive race), and Anthropic has rarely, if ever, given a justification for it. Senior staff keep indicating that not building AGI is not on the table, yet they rarely, if ever, show up to engage with criticism or to justify this position in public discourse. This is a key reason why it reads to me as propaganda: it’s an incredibly convenient belief for them, and they state it as though any other position is untenable, without argument and without acknowledging or engaging with the position that it is ethically wrong to speed up the development of a technology they believe has a 10-20% chance of causing human extinction (or a similarly bad outcome).
I wish that they would just come out, lay out the considerations for and against building a frontier lab that is competing to reach the finish line first, acknowledge other perspectives and counterarguments, and explain why they made the decision they did. This would do wonders for the ability to trust them.
(Relatedly, I don’t believe the Machines of Loving Grace essay is defending the position that speeding up AI is good; the piece in fact explicitly says it will not assess the risks of AI. Here are my comments at the time on that essay also being propaganda.)
Haha: agreed :)
My rage there was around a different level/kind of fakery from Anthropic, but I see now how this could connect with / be part of a broader pattern that I wasn’t aware of. Remaining quibbles aside, I was wrong; this would be sufficient context/justification for using “propaganda”.
I see how it could potentially be the same pattern as people claiming EY hasn’t talked enough about his position; to the person you disagree with, you will never have explained enough. But yeah, I doubt any of the labs have.