Meta has a scam problem
The company's own projections showed up to 10 percent of its revenue could come from scams, according to a new leak. PLUS: OpenAI's "backstop" controversy
On Thursday, Reuters’ Jeff Horwitz reported that Meta projected that 10 percent of its annual revenue in 2024 — roughly $16 billion — would come from ads for scams and banned goods. Today, let’s talk about the alarming details — and the frustrating stalemate that has emerged this year between the company and its critics.
First, read Horwitz's investigation, which is chockablock with eye-popping details about the scope of fraud on Facebook, Instagram, and WhatsApp and Meta's surprisingly cautious efforts to eliminate it. Drawing on internal documents and the company's own projections, Horwitz establishes that scams on Meta's platforms are a much larger source of revenue than is widely known.
He writes:
The documents indicate that Meta’s own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company’s platforms were involved in a third of all successful scams in the U.S. Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms. [...]
The insights from the documents come at a time when regulators worldwide are pushing the company to do more to protect its users from online fraud. In the U.S., the Securities and Exchange Commission is investigating Meta for running ads for financial scams, according to the internal documents. In Britain, a regulator last year said it found that Meta’s products were involved in 54% of all payments-related scam losses in 2023, more than double all other social platforms combined.
In a statement that the company shared with Platformer and other outlets, Meta said that fraud accounts for less than 10 percent of revenue, without providing an alternative figure. It also expressed a commitment to further reducing the amount of fraud on its platforms.
"We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it and we don’t want it either," the company told me. "As scam activity becomes more persistent and sophisticated, so do our efforts. Unfortunately, the leaked documents present a selective view that distorts Meta’s approach to fraud and scams by focusing on our efforts to assess the scale of the challenge, not the full range of actions we have taken to address the problem.”
At the same time, Horwitz's investigation highlights just how little the prevalence of scams on the platform threatens the company. The leaked documents state that Meta expects $1 billion in fines over scam ads on the platform. But every six months, the company generates $3.5 billion from high-risk ads, "such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit."
Horwitz further reports that the company limited the degree to which its ad safety team could intervene. "In the first half of 2025, a February document states, the team responsible for vetting questionable advertisers wasn’t allowed to take actions that could cost Meta more than 0.15% of the company’s total revenue," he writes. "That works out to about $135 million out of the $90 billion Meta generated in the first half of 2025."
(Meta told Horwitz that 0.15% is not a hard limit.)
Users can report ads that they believe may be scams. But in 2023, the company found that users had filed about 100,000 valid reports — and Meta "ignored or incorrectly rejected 96% of them."
Any platform with billions of users will attract a commensurate number of scammers eager to rob them, and some amount of fraud takes place anywhere users transact. We can't easily compare the prevalence of scams on Meta's platforms with that on other platforms, since no platform publicly reports that figure.
Still, I found at least some data suggesting that Meta underperforms relative to its peers. Meta told Reuters that it has removed 134 million pieces of scam content so far this year. Google, the world's largest ad seller, removed 415 million scam ads in 2024, according to its most recent ad safety report, and permanently suspended 700,000 advertisers for impersonating public figures, a scam technique that AI has supercharged.
Meanwhile, Indicator's Alexios Mantzarlis has repeatedly caught Meta this year running thousands of ads for so-called "nudify" apps, which let users create AI-generated nudes of real women. The ads ran across Meta's platforms even though their copy openly boasted about creating non-consensual images. ("Upload a photo, erase anyone’s clothes," read some ads found by the outlet.)
How is Meta addressing the issue? The company's public story is that it is trying really hard to bring down the number of scams; its private story, as reported by Horwitz, is that it is bracing to pay a small fine while not doing anything that would reduce revenue too dramatically.
And the company has implemented one novel idea to achieve that goal: what it calls "penalty bids." Placing an ad on Facebook requires the advertiser to win an auction; the company has begun charging suspected scammers higher prices to win those auctions. The idea is to make scamming more expensive, in the hope of reducing the overall volume of scams on its platforms; Meta says that testing has since shown a decline in scam reports.
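To make that concrete, here's a minimal sketch of how a penalty bid could work in a simple highest-bid-wins auction. Meta hasn't published its actual mechanism, so the risk score, threshold, and multiplier below are hypothetical assumptions, not the company's real parameters.

```python
# A minimal, hypothetical sketch of "penalty bids." Meta has not published
# how its system works; the risk scores, threshold, and multiplier here
# are illustrative assumptions, not the company's actual parameters.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float     # dollars offered per impression
    scam_risk: float  # classifier score in [0, 1]; hypothetical

PENALTY_MULTIPLIER = 3.0  # suspected scammers must outbid rivals 3-to-1
RISK_THRESHOLD = 0.8      # score above which the penalty applies

def effective_bid(bid: Bid) -> float:
    """Discount a risky bid, so a suspected scammer must pay more to win."""
    if bid.scam_risk >= RISK_THRESHOLD:
        return bid.amount / PENALTY_MULTIPLIER
    return bid.amount

def run_auction(bids: list[Bid]) -> Bid:
    """Award the impression to the highest effective bid."""
    return max(bids, key=effective_bid)

bids = [
    Bid("legitimate_brand", amount=2.00, scam_risk=0.05),
    Bid("suspected_scammer", amount=4.00, scam_risk=0.95),
]
print(run_auction(bids).advertiser)
# -> legitimate_brand: the scammer's $4.00 bid counts as only $1.33 after
#    the penalty, so scamming gets pricier without the ad being blocked
```

Note what even this toy version implies: a suspected scammer's ad still runs if they pay enough, which tracks with Horwitz's reporting that Meta sought to curb fraud without cutting too deeply into revenue.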
Of course, it's not entirely clear how much we ought to celebrate a decline in overall scam reports, when Meta's own employees say that they are failing to act on tens of thousands of valid scam reports every year.
Horwitz has been on a tear this year; in addition to his investigation of celebrity impersonator chatbots, he delivered this exposé on Meta chatbot guidelines permitting "sensual" roleplay with children. (Meta subsequently changed its guidelines.) Together, his pieces paint a picture of a company that tolerates an eye-watering failure rate in some of the systems most critical to its users' well-being, making only minor adjustments in response to public pressure.
But at a time when the United States is lurching from one crisis to another, and tech platforms have largely been able to buy off the government, it can feel like nothing sticks. Gone are the days when a confusing data privacy scandal could bring Meta to a halt. Today, a few million dollars is all it takes to turn some of the company's fiercest former critics into full-throated supporters.
Yes, there's still that federal antitrust case against the company. And Meta pays for its failures in other ways, from the extremely high cost of recruiting talent for its AI efforts to its bottom-tier status in public surveys about brand reputation.
All in all, though, it's nothing a $1.56 trillion company can't handle. Someday its executives will be called upon to explain how the company became an unwitting pillar of the scam economy. In the meantime, though, crime pays.

Sponsored
Unknown number calling? It’s not random…

The BBC caught scam call center workers on hidden cameras as they laughed at the people they were tricking.
One worker bragged about making $250k from victims. The disturbing truth?
Scammers don’t pick phone numbers at random. They buy your data from brokers.
Once your data is out there, it’s not just calls. It’s phishing, impersonation, and identity theft.
That’s why we recommend Incogni: They delete your info from the web, monitor and follow up automatically, and continue to erase data as new risks appear.
Black Friday deal: Try Incogni here and get 55% off your subscription with code PLATFORMER

Introducing the 2-minute Platformer Audience Survey
This week, Axios announced the launch of Alltogether, a new collective of independent publishers exploring new ways to build sustainable businesses in journalism. Platformer is proud to be part of the launch, alongside friends like Sources' Alex Heath, Big Technology's Alex Kantrowitz, and Upstarts' Alex Konrad. At first we were worried that the group didn't have enough Alexes, but we've been assured that more will be added in the future.
Anyway: a huge part of this initiative is trying to get to know our audiences a little better. If you have literally two minutes, would you mind filling out our anonymous audience survey? We're just looking for some basic demographic data that will help us find sponsors to expand our journalism. It would mean a lot to me if you helped us out here. I'm even told many readers can fill this survey out in as little as one minute. Are you one of them?? — Casey


On the podcast this week: 1X CEO Bernt Bornich and his company's viral humanoid robot, Neo, join us in the studio. Then, some HatGPT.
Apple | Spotify | Stitcher | Amazon | Google | YouTube

Following
OpenAI’s “federal backstop” firestorm
What happened: OpenAI’s CFO Sarah Friar walked back a statement she made about OpenAI wanting a federal “backstop,” or financial guarantee from the government to shore up OpenAI’s infrastructure funding.
In an interview with reporter Sarah Krouse at the Wall Street Journal’s Tech Live event on Wednesday, Friar said OpenAI was looking for “an ecosystem of banks, private equity, maybe even governmental” support for its AI infrastructure projects.
Krouse attempted to clarify. “Meaning, like, a federal subsidy or something?”
Friar said she meant “the backstop, the guarantee that allows the financing to happen.” That type of guarantee can increase “the amount of debt that you can take,” she added.
Krouse added, “So, some federal backstop for chip investment?”
“Exactly,” Friar said.
But later that day, Friar wrote a LinkedIn post taking it back. “I want to clarify my comments earlier today,” she wrote. “OpenAI is not seeking a government backstop for our infrastructure commitments.”
Friar said that using the word “backstop” was a mistake that “muddied the point.” The full clip showed her point more clearly, she said.
She said her point was that “American strength in technology will come from building real industrial capacity.” This would require “the private sector and government playing their part,” although Friar didn’t specify how.
Why we’re following: OpenAI’s planned infrastructure investments are unprecedented in scope: they’re currently on the hook for more than a trillion dollars in deals with some of the world’s biggest tech companies.
Lots of us have been wondering what will happen if OpenAI can’t follow through on those promises. Apparently they have been, too. Although they’re denying the most controversial versions of bailout proposals, they continue to suggest the government should engage in some kind of public-private partnership with OpenAI.
In a conversation last month, economist Tyler Cowen asked OpenAI’s CEO Sam Altman how accidents from his company’s infrastructure would be insured: would they have contracts with the government, like nuclear power plants do?
“At some level, when something gets sufficiently huge, whether or not they are on paper, the federal government is kind of the insurer of last resort,” Altman said.
What people are saying: Friar’s comments immediately drew backlash on X. Pseudonymous finance account @RudyHavenstein posted: “OpenAI Would Like To Privatize Profits And Socialize Losses - WSJ.”
In the aftermath of the comments, White House AI and Crypto Czar David Sacks posted to X, saying “There will be no federal bailout for AI.”
Sacks caveated that “to give benefit of the doubt, I don’t think anyone was actually asking for a bailout. (That would be ridiculous.)”
Ultimately, it was Friar herself who received a backstop — from her CEO.
“I would like to clarify a few things,” Sam Altman said later on Thursday, beginning a 16-paragraph X post that discusses OpenAI’s revenue financing plans at length.
OpenAI doesn’t “have or want government guarantees for OpenAI data centers,” he said.
OpenAI’s CFO “could have phrased things more clearly,” Altman said, and his own earlier comments to Cowen were “not about bailing out a company.”
Instead, Altman said, he was talking about what governments do when things go “catastrophically wrong.” For example, if an AI — maybe one made by OpenAI — coordinates “a large-scale cyberattack that disrupts critical infrastructure,” he posted.
Investor Parik Patel replied, “I ain’t reading all this.” Altman’s and Patel’s posts each have about 6K likes on X.
Cognitive scientist and AI critic Gary Marcus was “not comforted” by Altman’s comments. This is “paragraph after paragraph of spin doctoring,” he said in a quote post on X.
—Ella Markianos

Side Quests
A suite of new right-wing chatbots is turbocharging America’s culture war. How Marc Andreessen’s bet on President Trump is paying off. A new bipartisan bill would require companies to report AI-related job losses, amid new data that show October saw the most job cuts in more than two decades. The FBI is trying to unmask the owner of archiving site archive.today.
An investigation into how Elon Musk and X amplify right-wing content in the United Kingdom. How Gen Z used social media to topple Nepal’s leader and elect a new one.
Nvidia CEO Jensen Huang warned that China “will win” the AI race ... because they weren't allowed to buy his chips.
Tesla shareholders approved a massive pay package for Musk, which would grant him shares worth nearly $1 trillion — a bid to distract him from his increasing focus on xAI.
OpenAI reached 1 million business customers. A profile of OpenAI “builder-in-chief” Greg Brockman. Nearly half a million users downloaded Sora on Android on its first day.
Google is reportedly in talks to invest more in Anthropic in a round that could value the startup at more than $350 billion.
Google reached a settlement with Epic Games to reform its app store; changes include reduced fees globally and a new Android program that makes registering alternative app stores easier. It removed 746 million links to popular shadow library Anna’s Archive from its search results. Gemini Deep Research can now directly search your Gmail and Google Chat conversations. Google Finance got new features through Deep Search. YouTube quietly deleted more than 700 videos documenting Israeli human rights violations.
Apple is reportedly planning to pay $1 billion a year to use a Google AI model in Siri.
The Motion Picture Association asked Meta to stop using “PG-13” in its new Instagram age system. The Chan Zuckerberg Initiative restructured into Biohub, which will focus on AI and scientific research.
Microsoft AI chief Mustafa Suleyman lays out his vision for a new superintelligence team. (Suleyman promised, unconvincingly, to keep humans in charge.) Businesses can manipulate customer AI agents into buying products, a new Microsoft test showed.
Surveillance company Flock Safety is facing investigations and lawsuits across the political spectrum.
In earnings: Snap shares climbed 15 percent after it beat analyst expectations in Q3. Match Group shares slid after the company forecast Q4 revenue below analyst estimates. Pinterest shares plummeted 21 percent after it reported a tariff-related hit to ad revenue.
How porn company Strike 3 made millions suing its viewers.
Foursquare founder Dennis Crowley debuted his new app, which sends you short audio updates whenever you put in your AirPods.
AI can design new antibodies from scratch and speed up drug development, new research suggests.

Those good posts
For more good posts every day, follow Casey’s Instagram stories.


Talk to us
Send us tips, comments, questions, and Instagram scams: casey@platformer.news. Read our ethics policy here.