I don’t think it’s possible for mere mortals to use Twitter for news about politics or current events and not go a little crazy. At least, I have yet to find a Twitter user who regularly or irregularly talks about these things, and fails to boost obvious misinformation every once in a while. It doesn’t matter what IQ they have or how rational they were in 2005; Twitter is just too chock full of lies, mischaracterizations, telephone games, and endless, endless, endless malicious selection effects, which, by the time you’re done using it, are tailored to whichever particular reader you are. It’s just impossible to use the site as people normally do and also practice the necessary skepticism about each individual post one is reading.
It doesn’t matter what IQ they have or how rational they were in 2005
This is a reference to Eliezer, right? I really don’t understand why he’s on Twitter so much. I find it quite sad to see one of my heroes slipping into the ragebait Twitter attractor.
Only inasmuch as he’s a proof-by-example. By that I mean he’s one of the most earnest/truthseeking users I found when I was still using the platform, and yet he still manages to retweet things outside his domain of expertise that are either extraordinarily misleading or literally, factually incorrect—and I think if you sat him down and prompted him to think about the individual cases he would probably notice why; he just doesn’t, because the platform isn’t conducive to that kind of deliberate thought.
I recall a rationalist I know chiding Eliezer for his bad tweeting, and then Eliezer asked him to show him an example of a recent tweet that was bad, and then the rationalist failed to find anything especially bad.
Perhaps this has changed in the 2-3 years since that event. But I’d be interested in an example of a tweet you (lc) thought was bad.
It’s not the tweets, it’s the retweets. People’s tweets on Twitter are usually not that bad. Their retweets, and, for slightly crazier people, their quote tweets are what contain the bizarre mischaracterizations, because they’re the pulls from the top of the attention-seeking crab bucket.
I run a company that sells security software to large enterprises. I remember seeing this (since deleted) post Eliezer retweeted last year during the Crowdstrike blue screen incident, and thinking: “Am I crazy? What on earth is this guy talking about?”
The audit requirements Mark is talking about don’t exist. He just completely made them up. ChatGPT’s explanation here is correct; even if you’re selling to the federal government[1], there’s no “fast track” for big names like Crowdstrike. At absolute maximum your auditor is going to ask for evidence that you use some IDS solution, and you’ll have to gather the same evidence no matter what solution you’re using.
Now, Yudkowsky is not a mendacious person, and he isn’t going to pump misinfo into the ether himself. But naturally if anybody goes on Twitter long enough they’re gonna see stuff like this, and it will just feel plausible to you. It will pass whatever cogsec antimalware blacklists & heuristics you’ve developed for assessing the credibility of things on the internet.
Probably because, like, if you overheard this kind of thing at a party, it would be credible! It’s only on this platform, where people are literally stepping over one another to concoct absurd lies for attention, where people are additionally incentivized to present as having personal expertise in the lie to go slightly more viral, and then an algorithm is selectively boosting the people that do that well enough and effectively enough to the top of your feed, that you encounter this nonsense. And then it goes into your world model, and the next time you see someone claim some crazy thing about how the food industry is in cahoots with Big Chicken you’re more likely to believe it, etc. etc.
And the vast majority of software companies, to be clear, don’t have to do anything like FedRAMP. The largest and most ubiquitous compliance frameworks, like SOC 2 or ISO 27001, are self-imposed standards maintained by nonprofits like the AICPA and have nothing to do with the government.
Appreciate the example. I remember reading that retweet!
At the time it sounded plausible to me, and I assumed it was accurate about certain industries.
I’m interested in understanding a bit more what’s going on here. Are we sure you’re talking about the same kinds of companies? I’d guess you’re dealing with companies in the range of 2k-20k employees, and I think Crowdstrike was substantially affecting companies in the range of 20k-200k employees (or at least that’s what I thought of when I saw this tweet), where I imagine auditors have to use much more broad-brush tools to do auditing.
The sorts of companies I imagine as having this kind of broad-strokes audit are extremely broad service industries – airlines, trains, grocery stores, banks, hospitals – where my impression is they often use very old software and buggy hardware due to their overwhelming size and sloth, and where I suspect that a lot of decisions get made by the minimum possible thing required to meet some formal requirements.
The purpose of these audits is not generally to verify with certainty that you’re doing everything you say. That would be very hard, maybe impossible. Mostly you fill out a form saying you’re doing X. If it turns out later after a breach you weren’t doing the stuff you claimed to do during the audit, you’re sued.
We’re doing a PoC right now for a company with >400,000 employees. I am not their security team, and we haven’t sold yet, but on our end everything we’ve run into is normal procurement BS. The main thing that happens as you start to sell to larger customers is that you have to fill out a lot of forms saying your product does not use slave labor and such.
The “you’re sued” part is part of what ensures that the forms get filled out honestly and comprehensively.
Depending on the kind of audit you do, the actual deliverable you give your auditor may just be a spreadsheet with a bunch of Y/N answers to hundreds of questions like “Do all workstations have endpoint protection software”, “Do all servers have intrusion detection software”, etc. with screenshots of dashboards as supporting evidence for some of them.
But regardless of how much evidence an external auditor asks for, at large companies doing important audits, every single thing you say to the auditor will be backed internally with supporting evidence and justification for each answer you give.
At a bank you might have an “internal audit” department that has lots of meetings and back-and-forth with your IT department; at an airline it might be a consulting firm that you bring in to modernize your IT and help you handle the audit, or, depending on your relationship with your auditor and the nature of the audit, it might be someone from the audit firm itself that is advising you. In each case, their purpose is to make sure that every machine across your firm really does have correctly configured EDR, fully up-to-date security patches, a properly configured firewall, etc. before you claim that officially to an auditor.
Maybe you have some random box used to show news headlines on TVs in the hallways—turns out these are technically in-scope for having EDR and all sorts of other endpoint controls, but they’re not compatible with or not correctly configured to run Microsoft Defender, or something. Your IT department will say that there are various compensating / mitigating controls or justifications for why they’re out of scope, e.g. the firewall blocks all network access except the one website they need to show the news, the hardware itself is in a locked IT closet, they don’t even have a mouse / keyboard plugged in, etc. These justifications will usually be accepted unless you get a real stickler (or have an obstinate “internal auditor”). But it’s a lot easier to just say “they all run CrowdStrike” than it is to keep track of all these rationales and compensating controls, and indeed ease-of-deployment is literally the first bullet in CrowdStrike’s marketing vs. Microsoft Defender:
CrowdStrike: Deploy instantly with a single, lightweight agent — no OS prerequisites, complex configuration, or fine tuning required.

Microsoft: Complicated deployment hinders security. All endpoints require the premium edition of the latest version of Windows, requiring upfront OS and hardware upgrades for full security functionality.
You wrote in a sibling reply:
Further, the larger implication of the above tweet is that companies use Crowdstrike because of regulatory failure, and this is also simply untrue. There are lots of reasons people sort of unthinkingly go with the name brand option in security, but that’s a normal enterprise software thing and not anything specific to compliance.
I agree that this has little to do with “regulatory failure” and don’t know / don’t have an opinion on whether that’s what the original tweet author was actually trying to communicate. But my point is that firms absolutely do make purchasing decisions about security software for compliance reasons, and a selling point of CrowdStrike (and Carbon Black, and SentinelOne) is that they make 100% compliance easier to achieve and demonstrate vs. alternative solutions. That’s not a regulatory failure or even necessarily problematic, but it does result in somewhat different outcomes compared to a decision process of “unthinkingly going with the name brand option” or “carefully evaluate and consider only which solutions provide the best actual security vs. which are theater”.
The audit requirements Mark is talking about don’t exist. He just completely made them up.
The screenshotted tweet says that you’re required to install something like Crowdstrike, which is correct and also seems consistent with the ChatGPT dialogue you linked?
There are long lists of computer security practices and procedures needed to pass an audit for compliance with a standard like ISO27001, PCI DSS, SOC 2, etc. that many firms large and small are subject to (sometimes but not necessarily by law—e.g. companies often need to pass an SOC 2 audit because their customers ask for it).
As you say, none of these standards name specific software or vendors that you have to use in order to satisfy an auditor, but it’s often much less of a headache to use a “best in class” off-the-shelf product (like CrowdStrike) that is marketed specifically as satisfying specific requirements in these standards, vs. trying to cobble together a complete compliance posture using tools or products that were not designed specifically to satisfy those requirements.
A big part of the marketing for a product like CrowdStrike is that it has specific features which precisely and unambiguously satisfy more items in various auditor checklists than competitors.
So “opens up an expensive new chapter of his book” is colorful and somewhat exaggerated, but I wouldn’t describe it as “misinformation”—it’s definitely pointing at something real, which is that a lot of enterprise security software is sold and bought as an exercise in checking off specific checklist items in various kinds of audits, and how easy / convenient / comprehensive a solution makes box-checking is often a bigger selling point than how much actual security it provides, or what the end user experience is actually like.
There is no such thing as directionally correct. The tweet says “If you use Crowdstrike, your auditor checks a single line and moves on. If you use anything else, your auditor opens up an expensive new Chapter of his book.” This is literally and unambiguously false. No security/compliance standard—public or private—requires additional labor or verification procedures for non-Crowdstrike EDR alternatives. The steps to pass an audit are the same no matter what solution you’re using for EDR. Further, as a vendor, there is a very low ceiling for supporting most of the evidence collection required for any of the standards you cite, even at large scale. Allowing your users to collect such evidence (often, just screenshotting a stats page) is not nearly the largest barrier to entry for new entrants in Crowdstrike’s space.
Further, the larger implication of the above tweet is that companies use Crowdstrike because of regulatory failure, and this is also simply untrue. There are lots of reasons people sort of unthinkingly go with the name brand option in security, but that’s a normal enterprise software thing and not anything specific to compliance.
I also think that a more insidious problem with Twitter than misinfo is the way it teaches you to think. There are certain kinds of arguments people make and positions people hold which very clearly are there because of Twitter (though not necessarily because they read them on Twitter). They are usually sub-par, simple-minded, and very vibes (read: not evidence) based. A common example here is the “we’re so back” sort of hype-talk.
I will add that this problem is the most good faith version of the complaints with “woke” media/fiction (the bad faith one being of course people who simply don’t like any progressive ideas at all, no matter how they’re packaged). Writers and creatives in general spend a lot of time in the Twitter/X bubble and learn these patterns, then have their characters speak the same way, or follow the same logic. To anyone who isn’t deeply embedded in the same bubble and looking for validation, this ends up feeling extremely stilted and unnatural at best, and downright deranged at worst.
I will add that this problem is the most good faith version of the complaints with “woke” media/fiction (the bad faith one being of course people who simply don’t like any progressive ideas at all, no matter how they’re packaged)
I’ll avoid specific examples to reduce the risk of derailing the thread, but I would define “woke” as “prioritizing waging identity-group conflict above other values”. A piece of fiction has many dimensions on which it could be good or bad: novelty, consistency, believability, immersion, likability of characters, depth of characters, emotional range from plot events, predictability of plot, humor, and many more. A woke piece of fiction would be one in which it’s clear that many decisions have been made by a woke ethos, which considers it a good tradeoff to make significant sacrifices on those dimensions in order to advance its preferred identity-group conflict(s); the more woke, the more extreme those sacrifices.
I think there are many issues with the term in how it’s used to mean very different things, but what I’m referring to specifically is a stylistic trait: there are certain kinds of language, tone, etc. that tend to be more common in art clearly produced with that specific slant, and that are not necessarily perceived as trade-offs so much as just “this is how my tribe talks”. And that’s the one thing where I say there’s a huge disconnect, because it’s often just specifically the language of Twitter politics; the same terminology, the same arguments (with the same flaws, if that’s the case), and the same snappy, performative tone going for the “own” over anything else.
From my limited experience following AI events, agreed. Whole storms of nonsense can be generated by some random accounts posting completely non-credible claims, some people unthinkingly amplifying those, then other people seeing that they are being amplified, thinking it means there’s something to them, amplifying them further, etc.
In my experience, if I look at the Twitter account of someone I respect, there’s a 70–90% chance that Twitter turns them into a sort of Mr. Hyde self who’s angrier, less thoughtful, and generally much worse epistemically. I’ve noticed this tendency in myself as well; historically I tried pretty hard to avoid writing bad tweets, and avoid reading low-quality Twitter accounts, but I don’t think I succeeded, and recently I gave up and just blocked Twitter using LeechBlock.
I’m sad about this because I think Twitter could be really good, and there’s a lot of good stuff on it, but there’s too much bad stuff.
I follow a hardline no-Twitter policy. I don’t visit Twitter at all, and if a post somewhere else has a screenshot of a tweet I’ll scroll past without reading it. There are some writers like Zvi whom I’ve stopped reading because their writing is too heavily influenced by Twitter and quotes too many tweets.
At least, I have yet to find a Twitter user who regularly or irregularly talks about these things, and fails to boost obvious misinformation every once in a while.
Feel free to pass on this, but I would be interested in hearing about what obvious misinformation I’ve boosted if the spirit moves you to look.
Just about the sanest thing I was able to do while under Twitter’s influence was quitting it. The thing is a memetic infohazard. It is a Keter-class SCP. It is a window through which Eldritch gods scream into the void left after they ate a person’s soul, and its allure draws all to their own end in madness and despair.
I use twitter a lot (maybe 45 minutes a day on average), and I don’t think I do that. I don’t think I boost misinformation. Unless replying to people who spread misinformation to argue with them counts.
Feel free to look through and prove me wrong. I think you might be able to find tweets I’ll feel somewhat bad about if you post here, but I think they’d be me calling someone an idiot or something, not me spreading misinformation.
There are other people who I think this applies to too. Like @Isaac King, who I think is a very active twitter user who is reasonable. Even Eliezer, who you seem to be pointing to as an example of someone who is negatively affected by twitter, I think is not really very bad. I think his twitter conduct is less than literally perfect, but I can’t remember him boosting misinformation in a clear cut way.
Or, I can remember a few instances spreading what I’d call “misinformation”, like saying, without caveats, that “saturated fat is healthier than polyunsaturated fats” (a paraphrase; it might’ve been just unsaturated fats), but I think he sincerely believes that, and not because of twitter, so it’s not an example of what you’re talking about.
FWIW I would agree that Twitter is probably at least slightly bad for almost everyone. Those who are reasonable on Twitter are probably only so because they’re even more reasonable in other fora.
Edit: Bad in the particular way being discussed. It can be good in other ways, like learning new information about the world.
I think there’s some amount of misinformation or wrong facts that you’ll believe when you read enough things. Maybe twitter users who use it for news etc. end up with a higher % of incorrect views about the world, but I think anyone who reads the news regularly, even just from reputable sources (i.e. print), will have weird beliefs.
This framing underplays the degree to which the site is designed to produce misleading propaganda. The primary content creators are people who literally do that as a full time job.
Like, I’ll show you a common pattern of how it happens. It’s an extremely unfortunate example because a person involved has just died, but it’s the first one I found, and I feel like it’s representative of how political discourse happens on the platform:
First I’ll explain what’s actually misleading about this so I can make my broader point. The quote tweeted account, “Right Angle News Network”, reports that “The official black lives matter account has posted a video stating that black people ‘have a right to violence’ amid… the slaying of Iryna Zarutska”. The tweet is designed so that, while technically correct, it appears to be saying the video is about Iryna’s murder. But actually:
The video the account posted is taken from a movie made forty years ago.
The account doesn’t reference the murder at all. The only connection that the post has to the murder is that it was made a few days after it happened, which I guess means that it was posted “amid” the murder.
As is typical, the agitator’s tweet (which was carefully designed not to be an explicit lie), is then “quote tweeted” and rephrased by a larger account, who attempts to package the message for more virality. In this case the person just says “Official Black Lives Matter account justifying the murder of Iryna Zarutska”. But that’s not actually established at all! The quote tweeter is just reading a certainty into a tweet that was deliberately engineered to be misread.
This pattern happens everywhere, for every socially charged topic, on every side. “Your enemies are saying X horrible shit” is possibly the most common form of slander on Twitter. It happens especially often when people are posting about stuff that happens on other platforms, because there it’s extremely easy to lack context or mislead people about what’s going on.
“Your enemies are saying X horrible shit” is possibly the most common form of slander on Twitter
Possibly, but it’s probably also simply true most of the time. Usually, you can simply quote tweet them (or post screenshots) saying the thing you’re accusing them of saying. Sure, sometimes, it’s missing relevant context, but that’s relatively rare: normally, your enemies really are saying the horrible things.
I don’t think it’s possible for mere mortals to use Twitter for news about politics or current events and not go a little crazy. At least, I have yet to find a Twitter user who regularly or irregularly talks about these things, and fails to boost obvious misinformation every once in a while. It doesn’t matter what IQ they have or how rational they were in 2005; Twitter is just too chock full of lies, mischaracterizations, telephone games, and endless, endless, endless malicious selection effects, which by the time you’re done using it are designed to appeal to whichever reader in particular you are. It’s just impossible to use the site as people normally do and also practice the necessary skepticism about each individual post one is reading.
This is a reference to Eliezer, right? I really don’t understand why he’s on Twitter so much. I find it quite sad to see one of my heroes slipping into the ragebait Twitter attractor.
Only inasmuch he’s a proof-by-example. By that I mean he’s one of the most earnest/truthseeking users I found when I was still using the platform, and yet he still manages to retweet things outside his domain of expertise that are either extraordinarily misleading or literally, factually incorrect—and I think if you sat him down and prompted him to think about the individual cases he would probably notice why, he just doesn’t because the platform isn’t conducive to that kind of deliberate thought.
I recall a rationalist I know chiding Eliezer for his bad tweeting, and then Eliezer asked him to show him an example of a recent tweet that was bad, and then the rationalist failed to find anything especially bad.
Perhaps this has changed in the 2-3 years since that event. But I’d be interested in an example of a tweet you (lc) thought was bad.
It’s not the tweets, it’s the retweets. People’s tweets on Twitter are usually not that bad. Their retweets, and, for slightly crazier people, their quote tweets are what contain the bizarre mischaracterizations, because they’re the pulls from the top of the attention-seeking crab bucket.
I run a company that sells security software to large enterprises. I remember seeing this (since deleted) post Eliezer retweeted last year during the Crowdstrike blue screen incident, and thinking: “Am I crazy? What on earth is this guy talking about?”
The audit requirements Mark is talking about don’t exist. He just completely made them up. ChatGPT’s explanation here is correct; even if you’re selling to the federal government[1], there’s no “fast track” for big names like Crowdstrike. At absolute maximum your auditor is going to ask for evidence that you use some IDS solution, and you’ll have to gather the same evidence no matter what solution you’re using.
Now, Yudkowsky is not a mendacious person, and he isn’t going to pump misinfo into the ether himself. But naturally if anybody goes on Twitter long enough they’re gonna see stuff like this, and it will just feel plausible to you. It will pass whatever cogsec antimalware blacklists & heuristics you’ve developed for assessing the credibility of things on the internet.
Probably because, like, if you overheard this kind of thing at a party, it would be credible! It’s only on this platform, where people are literally stepping over one another to concoct absurd lies for attention, where people are additionally incentivized to present as having personal expertise in the lie to go slightly more viral, and then an algorithm is selectively boosting the people that do that well enough and effectively enough to the top of your feed, that you encounter this nonsense. And then it goes into your world model, and the next time you see someone claim some crazy thing about how the food industry is in cahoots with Big Chicken you’re more likely to believe it, etc. etc.
And the vast majority of software companies, to be clear, don’t have to do anything like FedRAMP. The largest and most ubiquitous compliance frameworks, like SOC2 or ISO 27001, are self-imposed standards maintained by nonprofits like the AICPA and have nothing to do with the government.
Appreciate the example. I remember reading that retweet!
At the time it sounded plausible to me, and I assumed it was accurate about certain industries.
I’m interested in understanding a bit more what’s going on here. Are we sure you’re talking about the same kinds of companies? I’d guess you’re dealing with companies in the range of 2k-20k employees, and I think Crowdstrike was substantially affecting companies in the range of 20k-200k employees (or at least that’s what I thought of when I saw this tweet), where I imagine auditors have to use much more broad-brush tools to do auditing.
The sorts of companies I imagine as having this kind of broad-strokes audit are extremely broad service industries – airlines, trains, grocery stores, banks, hospitals – where my impression is they often use very old software and buggy hardware due to their overwhelming size and sloth, and where I suspect that a lot of decisions get made by the minimum possible thing required to meet some formal requirements.
The purpose of these audits is not generally to verify with certainty that you’re doing everything you say. That would be very hard, maybe impossible. Mostly you fill out a form saying you’re doing X. If it turns out later after a breach you weren’t doing the stuff you claimed to do during the audit, you’re sued.
We’re doing a PoC right now for a company with >400,000 employees. I am not their security team, and we haven’t sold yet, but on our end everything we’ve run into is normal procurement BS. The main thing that happens as you start to sell to larger customers is that you have to fill out a lot of forms saying your product does not use slave labor and such.
The “you’re sued” part is part of what ensures that the forms get filled out honestly and comprehensively.
Depending on the kind of audit you do, the actual deliverable you give your auditor may just be a spreadsheet with a bunch of Y/N answers to hundreds of questions like “Do all workstations have endpoint protection software”, “Do all servers have intrusion detection software”, etc. with screenshots of dashboards as supporting evidence for some of them.
But regardless of how much evidence an external auditor asks for, at large companies doing important audits, every single thing you say to the auditor will be backed internally with supporting evidence and justification for each answer you give.
At a bank you might have an “internal audit” department that has lots of meetings and back-and-forth with your IT department; at an airline it might be a consulting firm that you bring in to modernize your IT and help you handle the audit, or, depending on your relationship with your auditor and the nature of the audit, it might be someone from the audit firm itself that is advising you. In each case, their purpose is to make sure that every machine across your firm really does have correctly configured EDR, fully up to date security patches, firewalled, etc. before you claim that officially to an auditor.
Maybe you have some random box used to show news headlines on TVs in the hallways—turns out these are technically in-scope for having EDR and all sorts of other endpoint controls, but they’re not compatible with or not correctly configured to run Microsoft Defender, or something. Your IT department will say that there are various compensating / mitigating controls or justifications for why they’re out of scope, e.g. the firewall blocks all network access except the one website they need to show the news, the hardware itself is in a locked IT closet, they don’t even have a mouse / keyboard plugged in, etc. These justifications will usually be accepted unless you get a real stickler (or have an obstinate “internal auditor”). But it’s a lot easier to just say “they all run CrowdStrike” than it is to keep track of all these rationales and compensating controls, and indeed ease-of-deployment is literally the first bullet in CrowdStrike’s marketing vs. Microsoft Defender:
You wrote in a sibling reply:
I agree that this has little to do with “regulatory failure” and don’t know / don’t have an opinion on whether that’s what the original tweet author was actually trying to communicate. But my point is that firms absolutely do make purchasing decisions about security software for compliance reasons, and a selling point of CrowdStrike (and Carbon Black, and SentinelOne) is that they make 100% compliance easier to achieve and demonstrate vs. alternative solutions. That’s not a regulatory failure or even necessarily problematic, but it does result in somewhat different outcomes compared to a decision process of “unthinkingly going with the name brand option” or “carefully evaluate and consider only which solutions provide the best actual security vs. which are theater”.
The screenshotted tweet says that you’re required to install something like Crowdstrike, which is correct and also seems consistent with the ChatGPT dialogue you linked?
There are long lists of computer security practices and procedures needed to pass an audit for compliance with a standard like ISO27001, PCI DSS, SOC 2, etc. that many firms large and small are subject to (sometimes but not necessarily by law—e.g. companies often need to pass an SOC 2 audit because their customers ask for it).
As you say, none of these standards name specific software or vendors that you have to use in order to satisfy an auditor, but it’s often much less of a headache to use a “best in class” off-the-shelf product (like CrowdStrike) that is marketed specifically as satisfying specific requirements in these standards, vs. trying to cobble together a complete compliance posture using tools or products that were not designed specifically to satisfy those requirements.
A big part of the marketing for a product like CrowdStrike is that it has specific features which precisely and unambiguously satisfy more items in various auditor checklists than competitors.
So “opens up an expensive new chapter of his book” is colorful and somewhat exaggerated, but I wouldn’t describe it as “misinformation”—it’s definitely pointing at something real, which is that a lot of enterprise security software is sold and bought as an exercise in checking off specific checklist items in various kinds of audits, and how easy / convenient / comprehensive a solution makes box-checking is often a bigger selling point than how much actual security it provides, or what the end user experience is actually like.
There is no such thing as directionally correct. The tweet says “If you use Crowdstrike, your auditor checks a single line and moves on. If you use anything else, your auditor opens up an expensive new Chapter of his book.” This is literally and unambiguously false. No security/compliance standard—public or private—requires additional labor or verification procedures for non-Crowdstrike EDR alternatives. The steps to pass an audit are the same no matter what solution you’re using for EDR. Further, as a vendor, there is a very low ceiling for supporting most of the evidence collection required for any of the standards you cite, even at large scale. Allowing your users to collect such evidence (often, just screenshotting a stats page) is not nearly the largest barrier to entry for new incumbents in Crowdstrike’s space.
Further, the larger implication of the above tweet is that companies use Crowdstrike because of regulatory failure, and this is also simply untrue. There are lots of reasons people sort of unthinkingly go with the name brand option in security, but that’s a normal enterprise software thing and not anything specific to compliance.
I also think that a more insidious problem with Twitter than misinfo is the way it teaches you to think. There are certain kinds of arguments people make and positions people hold which very clearly are there because of Twitter (though not necessarily because they read them on Twitter). They are usually sub-par, simple-minded, and very vibes (read: not evidence) based. A common example here is the “we’re so back” sort of hype-talk.
I will add that this problem is the most good faith version of the complaints with “woke” media/fiction (the bad faith one being of course people who simply don’t like any progressive ideas at all, no matter how they’re packaged). Writers and creatives in general spend a lot of time in the Twitter/X bubble and learn these patterns, then have their characters speak the same way, or follow the same logic. To anyone who isn’t deeply embedded in the same bubble and looking for validation, this ends up feeling extremely stilted and unnatural at best, and downright deranged at worst.
I’ll avoid specific examples to reduce the risk of derailing the thread, but I would define “woke” as “prioritizing waging identity-group conflict above other values”. A piece of fiction has many dimensions on which it could be good or bad: novelty, consistency, believability, immersion, likability of characters, depth of characters, emotional range from plot events, predictability of plot, humor, and many more. A woke piece of fiction would be one in which it’s clear that many decisions have been made by a woke ethos, which considers it a good tradeoff to make significant sacrifices on those dimensions in order to advance its preferred identity-group conflict(s); the more woke, the more extreme those sacrifices.
I think there are many issues with the term in how it’s used to mean very different things, but what I’m referring to specifically is a stylistic trait—there are certain kinds of language, of tone etc that tend to be more common in art produced clearly with that specific slant, and that are not necessarily perceived as trade-offs, but rather as just “this is how my tribe talks”. And that’s the one thing where I say there’s a huge disconnect, because it’s often just specifically the language of Twitter politics; the same terminology, the same arguments (with the same flaws, if that’s the case), and the same snappy, performative tone going for the “own” over anything else.
This seems like a natural consequence of a tight optimization loop for public engagement.
From my limited experience following AI events, agreed. Whole storms of nonsense can be generated by some random accounts posting completely non-credible claims, some people unthinkingly amplifying those, then other people seeing that they are being amplified, thinking it means there’s something to them, amplifying them further, etc.
In my experience, if I look at the Twitter account of someone I respect, there’s a 70–90% chance that Twitter turns them into a sort of Mr. Hyde self who’s angrier, less thoughtful, and generally much worse epistemically. I’ve noticed this tendency in myself as well; historically I tried pretty hard to avoid writing bad tweets, and avoid reading low-quality Twitter accounts, but I don’t think I succeeded, and recently I gave up and just blocked Twitter using LeechBlock.
I’m sad about this because I think Twitter could be really good, and there’s a lot of good stuff on it, but there’s too much bad stuff.
I follow a hardline no-Twitter policy. I don’t visit Twitter at all, and if a post somewhere else has a screenshot of a tweet I’ll scroll past without reading it. There are some writers like Zvi whom I’ve stopped reading because their writing is too heavily influenced by Twitter and quotes too many tweets.
Feel free to pass on this, but I would be interested in hearing about what obvious misinformation I’ve boosted if the spirit moves you to look.
Just about the sanest thing I was able to do while under Twitter’s influence was quitting it. The thing is a memetic infohazard. It is a Keter-class SCP. It is a window through which Eldritch gods scream into the void left after they ate a person’s soul, and its allure draws all to their own end in madness and despair.
I use Twitter a lot (maybe 45 minutes a day on average), and I don’t think I do that. I don’t think I boost misinformation. Unless replying to people who spread misinformation to argue with them counts.
https://x.com/williawa
Feel free to look through and prove me wrong. I think you might be able to find tweets I’ll feel somewhat bad about if you post here, but I think they’d be me calling someone an idiot or something, not me spreading misinformation.
There are other people who I think this applies to too. @Isaac King, for example, is a very active Twitter user who I think is reasonable. Even Eliezer, who you seem to be pointing to as an example of someone negatively affected by Twitter, I think is not really very bad. I think his Twitter conduct is less than literally perfect, but I can’t remember him boosting misinformation in a clear-cut way.
Or, I can remember a few instances of him spreading what I’d call “misinformation”, like saying, without caveats, that “saturated fat is healthier than polyunsaturated fats” (paraphrasing; it might’ve been just unsaturated fats), but I think he sincerely believes that, and not because of Twitter, so it’s not an example of what you’re talking about.
FWIW I would agree that Twitter is probably at least slightly bad for almost everyone. Those who are reasonable on Twitter are probably only so because they’re even more reasonable in other fora.
Edit: Bad in the particular way being discussed. It can be good in other ways, like learning new information about the world.
I think there’s some amount of misinformation or wrong facts that you’ll believe when you read enough things. Maybe Twitter users who use it for news etc. end up with a higher % of incorrect views about the world, but I think anyone who reads the news regularly, even if only from reputable sources (i.e. print), will have weird beliefs.
This framing underplays the degree to which the site is designed to produce misleading propaganda. The primary content creators are people who literally do that as a full time job.
Like, I’ll show you a common pattern of how it happens. It’s an extremely unfortunate example because a person involved has just died, but it’s the first one I found, and I feel like it’s representative of how political discourse happens on the platform:
First I’ll explain what’s actually misleading about this so I can make my broader point. The quote tweeted account, “Right Angle News Network”, reports that “The official black lives matter account has posted a video stating that black people ‘have a right to violence’ amid… the slaying of Iryna Zarutska”. The tweet is designed so that, while technically correct, it appears to be saying the video is about Iryna’s murder. But actually:
The video the account posted is taken from a movie made forty years ago.
The account doesn’t reference the murder at all. The only connection that the post has to the murder is that it was made a few days after it happened, which I guess means that it was posted “amid” the murder.
As is typical, the agitator’s tweet (which was carefully designed not to be an explicit lie) is then “quote tweeted” and rephrased by a larger account, who attempts to package the message for more virality. In this case the person just says “Official Black Lives Matter account justifying the murder of Iryna Zarutska”. But that’s not actually established at all! The quote tweeter is just reading a certainty into a tweet that was deliberately engineered to be misread.
This pattern happens everywhere, for every socially charged topic, on every side. “Your enemies are saying X horrible shit” is possibly the most common form of slander on Twitter. It happens especially often when people are posting about stuff that happens on other platforms, because there it’s extremely easy to lack context or mislead people about what’s going on.
Possibly, but it’s probably also simply true most of the time. Usually, you can simply quote tweet them (or post screenshots) saying the thing you’re accusing them of saying. Sure, sometimes, it’s missing relevant context, but that’s relatively rare: normally, your enemies really are saying the horrible things.