I truly have no settled opinion on Anthropic or you, and I get it's annoying to zoom in on something minor/unimportant like this, but:
I think your lack of charity here is itself a kind of propaganda, a non-truth-seeking behavior. Even in your Mark Zuckerberg hypothetical, considered purely "technically"/locally, if he's not lying in that scenario, I don't think it's appropriate to call it propaganda. But of course in that world, this world, he is overwhelmingly likely to be lying or mind-warped past the point of the difference mattering.
Are you not equivocating/motte-and-bailey-ing between what seem to be three possible meanings of "propaganda"?
1) something like lies
2) doing something that’s also beneficial to you financially
3) stating something without evidence
You know the connotation "propaganda" conveys (1, and general badness), but you fell back on 2 and 3.
Also, while you may not agree with them, you know there are plenty of arguments (proposed evidence) for why we can't stop. You are therefore being disingenuous. Must they be penned in toto by Dario to count? Also, didn't he put serious effort behind Machines of Loving Grace? (I haven't read it yet.) This isn't an 'out of the blue' and/or 'obvious bullshit' position like the Zuck hypothetical; the whole AI world is debating/split on this issue; it is reasonably possible he really believes it, etc.
…
Edit: saw your comment change just as I posted my reply. Not your fault, tbc; just explaining why some of the content of my reply refers to things that no longer exist.
I don’t think that propaganda must necessarily involve lying. By “propaganda,” I mean aggressively spreading information or communication because it is politically convenient / useful for you, regardless of its truth (though propaganda is sometimes untrue, of course).
When a government puts up posters saying "Your country needs YOU," this is intended to evoke a sense of duty and a sense of glory to be had; sometimes this sense of duty is appropriate, but sometimes your country wants you to participate in terrible wars for bad reasons. The government is saying it loudly because for them it's convenient for you to think that way, and that's not particularly correlated with the war being righteous, or with the people who decided to make such posters even having thought much about that question. They're saying it to win a war, not to inform their populace, and that's why it's propaganda.
Returning to the Amodei blogpost: I’ll happily concede that you don’t always need to give reasons for your beliefs when expressing them—context matters. But in every context—tweets, podcasts, ads, or official blogposts—there’s a difference between sharing something to inform and sharing it to push a party line.
I claim that many people have asked why Anthropic believes it's ethical for them to speed up AI progress (by contributing to the competitive race), and Anthropic have rarely-if-ever given a justification for it. Senior staff keep indicating that not building AGI is not on the table, yet they rarely-if-ever show up to engage with criticism or to give justifications for this in public discourse. This is a key reason why it reads to me as propaganda: it's an incredibly convenient belief for them, and they state it as though any other position is untenable, without argument and without acknowledging or engaging with the position that it is ethically wrong to speed up the development of a technology they believe has a 10-20% chance of causing human extinction (or a similarly bad outcome).
I wish that they would just come out, lay out the considerations for and against building a frontier lab that is competing to reach the finish line first, acknowledge other perspectives and counterarguments, and explain why they made the decision they have made. This would do wonders for the ability to trust them.
(Relatedly, I don’t believe the Machines of Loving Grace essay is defending the position that speeding up AI is good; the piece in fact explicitly says it will not assess the risks of AI. Here are my comments at the time on that essay also being propaganda.)
haha: agreed :)
My rage there was around a different level/kind of fakery from Anthropic, but I see now how this could connect with, or be part of, a broader pattern that I wasn't aware of. Remaining quibbles aside, I was wrong; this would be sufficient context/justification for using "propaganda".
I see how it could potentially be the same pattern as people claiming EY hasn't talked enough about his position; to the person you disagree with, you will never have explained enough. But yeah, I doubt any of the labs have.