Meta’s own estimate of the money it earns from fraudulent ads was $16 billion in 2024. In contrast, the total value of property stolen in burglaries in the US is around $3 billion per year. Unfortunately, 2019 is the last year with good numbers, but the figure should still be around that amount.
If Meta takes around half of the money that the fraudsters make as its cut, that would suggest that Meta helped steal around $30 billion via fraud per year, or 10x the amount that is stolen per year via burglaries.
To be fair, Meta facilitates stealing from people all over the world via fraud, not just from the US. But how high would the toll of Meta-facilitated fraud need to be before we consider Meta an organized criminal enterprise?
Is causing 10x the amount stolen per year via burglaries not enough? Would it need to be 20x for people to see Meta as a criminal enterprise?
[Question] Should we consider Meta to be a criminal enterprise?
It seems to me that the relevant factor that makes a platform a “criminal enterprise” is not the absolute amount of crime that it enables, but the percentage of the activity on that platform that is criminal in some way.
If 50% of Meta’s revenue comes from crime, then I’m more or less comfortable saying it’s a criminal organization. If 0.01% of its revenue comes from crime, but that happens to be a large total amount, I roll my eyes at accusations that they’re a criminal organization.
A quick search seems to indicate that their revenue in 2019 was about 70 billion dollars. So if I take your 16 billion figure at face value, about 23% of their revenue comes from enabling fraud (or did in 2019).
This is higher than I was expecting, and high enough that I would be inclined to hold the company heavily accountable. It’s a matter of taste whether or not it’s high enough to declare the company “a criminal enterprise.”

Maybe Google or Amazon should employ burglars: not only do they have a higher legitimate revenue than Facebook, they have other advantages. Google has a better reputation than Facebook, allowing them to get a higher percentage of revenue from crime before it becomes a PR issue. Amazon already has an extensive logistics network they could use for getaway vehicles.
The $16 billion number comes from 2024 (and is estimated by Meta themselves); the burglary estimate comes from 2019. Unfortunately, numbers from the same year weren’t available for both. In 2024, Meta’s revenue was $160 billion and their net income $62 billion, so we are talking about 10% of their revenue or roughly 25% of their net income.
The $16 billion comes from 2024, so it’s closer to 10% of their revenue.
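The percentages traded back and forth in this thread follow directly from these figures. A back-of-the-envelope sketch (the numbers are the thread’s own approximations, not audited figures):

```python
# Back-of-the-envelope check of the figures discussed in this thread.
fraud_ad_revenue = 16e9   # Meta's own 2024 estimate of revenue from fraudulent ads
total_revenue = 160e9     # Meta's 2024 revenue (approximate)
net_income = 62e9         # Meta's 2024 net income (approximate)
burglary_losses = 3e9     # ~US property stolen in burglaries per year (2019 figure)

share_of_revenue = fraud_ad_revenue / total_revenue  # 0.10, i.e. 10%
share_of_income = fraud_ad_revenue / net_income      # ~0.26, i.e. roughly a quarter

# If Meta's cut is about half of what the scammers take in, the total
# fraud facilitated is roughly twice the ad revenue:
implied_fraud_total = 2 * fraud_ad_revenue                      # $32 billion
multiple_of_burglaries = implied_fraud_total / burglary_losses  # ~10.7x

print(share_of_revenue, share_of_income, multiple_of_burglaries)
```

Swapping in updated figures (e.g. Meta’s audited annual-report numbers) refreshes the ratios without changing the shape of the argument.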
Along similar lines, should we consider Sam Altman, Dario Amodei, etc. to be more evil than Hitler, in terms of the expected number of people they will murder?
Murder is about intent. I think Dario believes that his actions reduce the chance of human extinction due to AI because Anthropic is doing a better job than its competitors.
When it comes to Sam Altman, I don’t think he believes that OpenAI is likely going to kill humanity.
Facebook, on the other hand, is intentionally and knowingly facilitating fraud: they think the government is unlikely to punish them for it, and they try to make as much money as they think they can get away with.
is intentionally and knowingly facilitating fraud
Do we actually have proof that it is intentional?
If you argue that the likely fine you’d have to pay is lower than the profit you are making, and that you thus don’t need to engage in strong measures to reduce fraud, I do see that as a sign of intent.
When Meta shows its users an ad that it believes with 90% probability to be from a scammer, it should at least tell the user that the ad is likely a scam. Withholding that information seems clearly intentional: older users especially probably assume that Meta makes some effort not to present them with scams, and it would be easy to show the user a warning that Meta thinks the ad is more likely than not a scam.
The expected number strongly depends on one’s model of the world. (It might well be negative, depending on one’s “P(doom)” (a hand-wave for a more correct consideration) and taking into account the chances to address the 100% mortality rate we observe for humans today.)
The real questions for that situation are:
- How does one handle high-variance situations with very high risks and very high rewards, regardless of expectation values (which we are not certain about)?
- How does this depend on the degree of centralization of the decision making (especially when disagreements are sharp and there is no trend towards broad consensus)?
No, it’s not possible for it to be negative. You’re not allowed to murder people even if you save an equal or greater number. If you invented a machine that had a 49% chance of killing me and a 51% chance of making me immortal, and you pointed it at me without permission, you would be committing a heinous crime and I’d be perfectly justified in self-defense. AI CEOs are doing the same thing at a much larger scale.
Well, observe that vaccinations have non-zero mortality and that they are often given to people who can’t meaningfully consent. (Actually, this is applicable to many childhood medical interventions; meanwhile, society does not differentiate between the right to life for children and for adults.)
Many other decisions have environmental trade-offs and other safety trade-offs which can have mortality implications, and they are taken without unanimous consent.
So, while your position is a possible position one can take, the current practices of human societies are not in agreement with that position, they are more nuanced.
PS. Since you referenced WWII, obviously the allies did not take the position that they were under obligation to fully refrain from inflicting civilian deaths either, to say the least.
I think we should have fairly strict standards for calling something a “criminal enterprise”; otherwise any sufficiently scaled business will inevitably be one. Verizon surely knows that their cell service is used to carry out crime. Same with Comcast and their internet service. Likewise for Bank of America and their financial services. I agree with Patrick McKenzie that the optimal amount of fraud is non-zero. If you make a service that is generally useful, then it is often also going to be useful for committing crimes. Actions you take to limit crimes inevitably also impact legitimate users and disrupt their ability to use the service legally.
I think the general thought on cases like Meta’s is that these companies should be doing more to prevent crime on their platforms, and that their inaction should be taken to suggest tacit acceptance of the crime that is occurring. But if you accept that the optimal amount of crime is non-zero, then it is a genuinely difficult question which policies and procedures are good and which ones cause more harm than good. As a result, we shouldn’t take the fact that a large amount of crime occurs on a platform as ipso facto making the platform a criminal enterprise.
Saying the optimal amount of fraud is nonzero is a way to avoid the question of what amount of fraud is reasonable. Is the optimal amount of fraud that Facebook facilitates really a multiple of the value stolen through burglaries?
With $62 billion in net income, the $16 billion made from crime is a quarter of their net income or a tenth of their revenue.
While this might be true for some companies, if you mail around lists of the biggest fraudsters on your platform within your company and don’t do anything to stop them, you aren’t at that point. Banning people that you internally call the biggest fraudsters that make millions does very little to impact legitimate users.
I think the phrase is meant to suggest the need to look deeper into the tradeoffs and understand what exactly you’d like an actor to do more or less of rather than go off of high-level impressions.
I think the discussions around the numbers you quote illustrate this point. The $16 billion comes from multiplying Meta’s revenue by a 10% figure from Meta’s internal reports. This becomes 23% in one of the comments and 25% of net income in yours. I think it is valid and useful to discuss these types of numbers when trying to understand the phenomenon, but it can be a bit risky when they are used too much for “vibes”. I feel like you are essentially going off vibes, and this is what “the optimal amount of fraud is non-zero” is meant to push against. Basically, we shouldn’t have a knee-jerk reaction to 10% of revenue or 25% of net income or whatever just because they sound bad at first glance. For example, Meta argues that the 10% of revenue is actually an overestimation:
Is this reasonable, or a greedy corporation trying to cover its ass? I feel like that way lies vibes. People are going to react to that largely based on how they feel about Meta or big tech or whatever.
It’s unclear to me that Meta failed to ban accounts they thought were obvious scammers while those accounts were still active. Can you quote a section of the article that leads you to believe this? It seems like Meta employees did discuss high-risk ads, but I could imagine ways this is more complicated than the vibes make it appear. For example, the report being cited seems to cover a period going back to 2021. You could have examples of scams on the platform that are being discussed but that the scammers have since moved on from and are no longer running. I think this sort of thing should be considered par for the course when dealing with criminal activity, since criminals should be expected to do what they can to adapt and avoid attempts to ban or otherwise interfere with their operations. Moving quickly to the newest, most effective scam is somewhat expected behavior.
If we look in detail at some of the potential actions Meta could take, I think it becomes clear that concern for the impact on legitimate users would be reasonable:
What is the potential downside of banning accounts on a lower threshold? The downside is that you potentially trade false negatives for false positives. In other words, you end up banning more legit advertisers by doing this.
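The false-positive/false-negative trade can be made concrete with a toy sketch (all numbers are hypothetical; `advertisers` pairs a model-assigned scam probability with the ground truth, which the platform of course never observes directly):

```python
# Toy illustration of how lowering a ban threshold trades
# false negatives (missed scammers) for false positives (banned legit users).
# Each entry: (model's scam probability, actually a scammer?)
advertisers = [
    (0.95, True), (0.90, True), (0.80, False), (0.70, True),
    (0.60, False), (0.55, False), (0.40, True), (0.20, False),
]

def ban_stats(threshold):
    """Return (legit advertisers banned, scammers missed) at a given threshold."""
    false_positives = sum(1 for p, scam in advertisers
                          if p >= threshold and not scam)
    false_negatives = sum(1 for p, scam in advertisers
                          if p < threshold and scam)
    return false_positives, false_negatives

print(ban_stats(0.85))  # strict threshold
print(ban_stats(0.50))  # lower threshold
```

With these toy numbers, dropping the threshold from 0.85 to 0.50 misses one fewer scammer but bans three legitimate advertisers, which is the trade described above.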
The article is critical of the fact that Meta employees discussed the issue of identifying scams as a business decision. One of the comments quoted:
This seems consistent with the concern about impacting legit users. People may differ in how much they believe this is the real motivation, but I think that gets us back into “vibes” territory.
Comparing the money made by meta to the amount of value stolen via burglaries is not a vibe based argument.
The right action for ads that are more likely than not fraudulent is to put them in a queue to be reviewed by human moderators, and probably to report to the police those fraud attempts that human moderators consider relatively certain to be fraud.
When ten percent of their revenue comes from facilitating fraud, then getting rid of those ten percent of their revenue hits “specific revenue guardrails” even when it doesn’t impact legitimate users at all. It’s quite obvious that removing 25% of the profits would result in some revenue guardrails being violated.
Increasing the price for fraudulent ads is a way to keep revenue high while reducing the amount of fraud.
If Meta believed that the correct number was substantially lower in a way that would make people less angry, they would probably have shared it. So what we can take from that statement is that Meta believes the correct number is so high that it’s embarrassing to them.
Generally: if corporate accountants have the job of estimating a number that’s bad for the corporation and could possibly surface in lawsuits or government investigations, do you believe they are more likely to over- or underestimate it?
I would also note that the corporate statement lacks any sign of Meta investing resources. It does not say things like “Because we care about our users not getting scammed, we spent $100 million on investigators to remove fraud from our platform.”
I think it is; why are we comparing burglaries to digital crimes when the latter are likely far more common?
And the ads are not only fraud as the post alleges. It’s fraud and banned goods. The sale of the latter isn’t stringently prosecuted since in most cases it’s a victimless crime. It is quite easy to buy drugs illegally on the internet.
Because Meta shares a huge responsibility for making these digital crimes easy to commit. According to their own analysts, their platforms are involved in a third of all successful scams in the U.S.
This isn’t just about ads but also about other communication, but it should be Meta’s responsibility to provide an environment for their users that doesn’t make them prime targets for crime.
Digital crime proliferation is a sign of big tech failing customers by not adequately protecting them.
Meta’s revenue in 2024 was $160 billion, so about 10% of it came from proceeds of crime. Verizon’s revenue in 2024 was $134 billion. I would be extremely shocked if $12 billion of that was directly proceeds from crimes; as an extremely loose pseudo-bound, if every burglary in the US was revenue straight to Verizon, that’s only a bucket of $3 billion.
I would say that a lot of the judgement hinges more on intent,
I’m assuming that this data comes from Meta’s own research. Do they glibly state that this is going on and that they have no intention of doing anything about it? Or are they intending to tackle the issue as much as possible? I don’t know that it’s fair to compare them with telephone companies etc. Verizon and Comcast may well be aware that criminal activity happens using their services, but could they tell you what percentage? Or exactly how much they will make from these enterprises? Do they forecast their revenue from scams?
On making a service that is generally useful: I would say that telephone companies and ISPs are useful. I don’t know that the same can be said of Meta; by all accounts they invest a great deal of resources in figuring out how to manipulate people into using their platforms as much as possible. There is also research saying that social media is highly addictive and detrimental to mental health. I would be tempted to reclassify this “useful service” as an addictive medium with the potential to harm a great number of people, with some mildly useful aspects such as messaging, surface-level connection, community communication, etc.
My gut says that they’re not exactly a ‘criminal enterprise’ as they were not set up for this purpose. However they do seem to be wandering a long way from their original intention and too close to negligence that can affect a great number of people.
No. “criminal enterprise” is not a useful designator for this. Their primary business is not criminal, and even as an ancillary business, it’s so deeply commingled with legit advertising that it’s very hard to prosecute them as even negligently supporting those crimes.
Also, your local pizza joint likely takes money that was criminally acquired. They are also not a good target to designate as a criminal enterprise.
If you want to be a bit more nuanced and debate whether we should devise and impose some “know your customer” type rules on advertising brokers (or pizza parlours), that could be interesting.
If you have some mafia element whose main profit source is a casino that is technically legal, yet makes 90% of their profit through crime, would you say that’s a criminal enterprise?
The pizza joint does not provide a way to target specific demographics who are likely vulnerable to scam attempts, run complex machine-learning algorithms that optimize the effectiveness of the scam, and auction off the marks to different scammers.
I mean, you’ve embedded the answer in the question, by definition of “mafia”, I think.
They might, if they’re in San Francisco and signed up for the SaaS version of business optimization.
What scenario are you imagining where the pizza joint facilitates fraud and people get scammed because of the pizza joint who otherwise wouldn’t be reached by the scam?