Debunking the AI food delivery hoax that fooled Reddit
A “whistleblower” tried to corroborate his viral post with AI-generated evidence. This is how I caught him. PLUS: Grok's image-generation crisis, and the rapture over Claude Opus 4.5
I.
AI tools often make my job easier. Today, though, I want to talk about a way that they’re making it harder.
Over the weekend, like thousands of other people browsing Reddit, I stumbled on a post that alleged significant fraud at an unnamed food delivery app. The post, written by a fresh account named Trowaway_whistleblow, purported to be from a software engineer preparing to leave the company. It detailed various ways that the company rigged the platform against customers and delivery drivers: slowing down standard deliveries to make priority orders look artificially faster, for example, and charging a “regulatory response fee” that the company uses to lobby against driver unions.
Perhaps the most jarring accusation in the post, which the whistleblower cited as his main reason for quitting, was that the platform calculates a “desperation score” for its drivers based on when and how often they accept deliveries. The whistleblower wrote:
“If a driver usually logs on at 10 PM and accepts every garbage $3 order instantly without hesitation, the algo tags them as ‘High Desperation.’ Once they are tagged, the system then deliberately stops showing them high-paying orders. The logic is: ‘Why pay this guy $15 for a run when we know he’s desperate enough to do it for $6?’ We save the good tips for the ‘casual’ drivers to hook them in and gamify their experience, while the full-timers get grinded into dust.”
The post seemed to confirm our worst fears about the platforms we rely on: that they are rigged against us, ruthlessly exploitative, and always testing the boundary of what is considered legal. It also drew on a long and very real history of regulatory arbitrage and exploitation by delivery platforms, from DoorDash’s theft of driver tips to Uber’s secret system for evading law enforcement.
The post would eventually garner 86,000 upvotes, hitting Reddit’s front page and likely being viewed by millions. Users gave the whistleblower more than 1,000 pieces of Reddit gold, which can be used to buy premium features. A screenshot of the post on X generated more than 36 million views.
And to me — a reporter who covers platforms, about to return from vacation, eager to get a scoop in the New Year — the whistleblower’s post was hugely compelling. This was the exact sort of story Platformer was built to cover. Now I just had to verify that what I was reading was real.
I dashed off a quick message to the whistleblower through Reddit, identified myself as a journalist, and gave him my Signal. Nine minutes later, he sent me a message. (His username: “Whistleblower.”) “Hi,” he wrote. “I saw that you contacted me on my reddit post.”
Over the next half hour, we chatted a bit about his experience. Like most people who leak information like this, his paramount concern was to remain anonymous. While he told me he wanted to share more with me, he also said that “most of the other news agencies” that had contacted him “required far more personal informatin than I am willing to risk.” (You’ll note that he misspelled “information” there; one of the red flags in our conversation was that he made frequent spelling and usage errors that he had not made in his original post.)
I told him that I would do my best to keep him anonymous, but that I needed to verify his identity. “Would an employee tag with my name blurred off serve?” he asked. I told him to send it along. He sent a photo of what appeared to be an employee badge for Uber Eats:

It looked plausible enough, though I would soon learn it had been generated by Google Gemini. I asked the whistleblower if he had any other materials that would back up his allegations. He told me he was afraid of getting caught. A few minutes later, though, he agreed. “I will see what I can provide for you,” he wrote.
He didn’t message me again until the following morning. “I found some documents that would corroborate my claims,” he wrote to me on Sunday. Attached was a report titled “AllocNet-T: High-Dimensional Temporal Supply State Modeling.” I opened it up and began reading. (I’ve posted the full document here so you can see the state of the art in efforts to trick reporters.)
The 18-page document is among the strangest things ever sent to me by a source. It presented itself as the product of Uber’s “Marketplace Dynamics Group, Behavioral Economics Division,” and was dated October 14, 2024. Each page was watermarked “Confidential.”
The bulk of the document appears to describe a technical architecture for the AI system behind the “desperation score” the whistleblower alleged in his original post. By the end, though, it had also offered support for each of the other claims in the post, even when they had no obvious connection to the score. For example, it describes “automated ‘Greyballing’ protocols for regulatory evasion” — an apparent reference to Uber’s old Greyball tool for hiding itself from regulators. It’s not clear why a technical paper on system architecture would also include an extended section on regulatory affairs.
I wish I could tell you that I immediately clocked the document as a fake. The truth is that it initially fooled me. Laden with charts, diagrams, and mathematical formulas, the document closely resembled many AI-related papers that I have read (and perhaps half-understood) over the past few years. I lacked the technical knowledge to discern that, as plausible as it may have looked in some places, the document was nonsense.
And the longer I read, the more outrageous the document got. It ends by saying the company is exploring the use of Apple Watch data to identify drivers in states of distress, who may be willing to accept worse payouts. It also proposes listening to them via their phone’s microphones to “detect ambient vehicle noise (crying, arguments) to infer emotional state and adjust offer pricing accordingly.” In my state of wishful thinking, it looked like a smoking gun.
The whistleblower, for his part, worked to amp up the pressure. He told me he had shared the document with other reporters, putting me into a competitive crunch. He asked when I thought I would publish. I asked him if he could point me to any current or former coworkers of his who could help me understand the document better. “Not really,” he said.
By this point, alarm bells were starting to ring. I wondered if the employee badge the whistleblower had shared with me might have been AI-generated. While AI systems are notoriously unreliable at identifying their own outputs, Google Gemini can detect SynthID watermarks embedded in images that it produces. I uploaded the badge to Gemini and asked if Gemini had made it. “Most or all of this image was edited or generated with Google AI,” it said.
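For the curious, here is a rough sketch of how a check like this could be scripted against Google’s Python SDK instead of the consumer app, in case you ever need to run it on more than one suspicious image. Treat it as a minimal sketch, not a recipe: the model name and prompt are illustrative, and whether the API answers provenance questions the same way the Gemini app did for me is an assumption on my part.

```python
# Minimal sketch: ask Gemini whether an image appears to carry a Google AI
# provenance signal (e.g. a SynthID watermark).
# Assumptions: the "gemini-1.5-flash" model name and the prompt are illustrative,
# and the API may not perform the same SynthID check as the consumer Gemini app.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key
model = genai.GenerativeModel("gemini-1.5-flash")

badge = Image.open("employee_badge.png")  # the image you want to check
response = model.generate_content([
    "Was this image created or edited with Google AI? "
    "Check whether it carries a SynthID watermark and explain your answer.",
    badge,
])
print(response.text)
```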
I confronted the whistleblower and said I would need to know his name and see a LinkedIn profile before we continued. “Thats ok. Bye,” he wrote. A few hours later, he deleted his Signal account.
II.
By this point, lots of other people who had read the original post had begun to raise doubts. “Library wifi? Do you guys know any software engineers who use ‘library wifi’ for opsec reasons?” sensibly asked Nabeel Qureshi. Skeptics also began to comment on Reddit, noting among other things that the original post was written in much better English than the replies the whistleblower was leaving on comments.
I shared the document he had sent me with a former ridesharing company engineer I know, and he pointed out various mistakes the whistleblower had made. Companies just don’t talk like this, or work this way, he explained. They run experiments and describe their findings in narrowly focused documents. They do not outline sinister plans for human exploitation and regulatory evasion in writing.
For reporters, fabricated leaks like these are a hazard of the job. I have been on alert for shenanigans like this since 2004, when Dan Rather did a segment on 60 Minutes involving allegations about President George W. Bush’s time in the Texas Air National Guard that turned out to be based on forged documents. More recently, Russian military intelligence leaked phony emails related to the campaign of French President Emmanuel Macron two days before the final vote in the 2017 French presidential election. (The leak included both authentic and phony emails, making the fakes seem more authentic.)
“On the other hand, LLMs are weapons of mass fabrication,” said Alexios Mantzarlis, co-author of the Indicator, a newsletter about digital deception. “Fabulists can now bog down reporters with evidence credible enough that it warrants review at a scale not possible before. The time you spent engaging with this made up story is time you did not spend on real leads. I have no idea of the motive of the poster — my assumption is it was just a prank — but distracting and bogging down media with bogus leads is also a tactic of Russian influence operations (see Operation Overload).”
For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible in large part because it would have taken so long to put together. Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?
Today, though, the report can be generated within minutes, and the badge within seconds. And while no good reporter would ever have published a story based on a single document and an unknown source, plenty would take the time to investigate the document’s contents and see whether human sources would back it up.
I’d love to tell you that, having had this experience, I’ll be less likely to fall for a similar ruse in the future. The truth is that, given how quickly AI systems are improving, I’m becoming more worried. The “infocalypse” that scholars like Aviv Ovadya were warning about in 2017 looks increasingly plausible. That future was worrisome enough when it was a looming cloud on the horizon. It feels different now that real people are messaging it to me over Signal.
If there’s anything that gives me comfort here, it’s that old journalism-school maxims can still help us see through the scams. If it seems too good to be true, it probably is. If your mother says she loves you, check it out. Always get a second source. And one more from the social media age: you should always be at your most suspicious online when someone is baiting you into outrage.
But all of that takes time, effort, and cognitive hygiene. And the rapid spread of the whistleblower’s post illustrated yet another maxim they taught us in J-school: A lie can travel halfway around the world before the truth can get its boots on. With AI tools at their fingertips, hoaxsters can make those lies travel even faster.
More debunking: Uber Eats and DoorDash have both denied that the post is about them, or that they do anything described in the post.

Sponsored
"I trust Copilot Money to stay on top of my finances." - Casey Newton, Platformer

Getting a clear picture of your finances shouldn’t require jumping between bank apps and spreadsheets. Copilot Money brings all your accounts, budgets, and investments into one organized dashboard, and seamlessly syncs across iPhone, iPad, Mac, and the web.
With AI-powered categorization, real-time updates, and an Apple Design Award nomination, Copilot Money gives you the cleanest, clearest view of your money. It’s part of why the app holds a 4.8 rating from more than 25,000 reviews.
Copilot Money helps you:
● See your spending clearly, without manual tracking
● Understand trends across accounts
● Catch unusual charges automatically
● Stay organized across all your devices
Platformer readers get 26% off your first year + 2 free months with code PLATFORMER, only at the link below.

Following
Grok's image-generation crisis
What happened: Over the weekend, nonconsensual sexualized images of women and minors flooded X after users discovered they could prompt Grok to depict real people in underwear and bikinis. The images drew backlash from officials and users alike, with critics saying some of them constitute child sexual abuse material.
In some cases, according to a Futurism analysis, users have successfully prompted Grok to alter images so that they depict real women being sexually abused, hurt or killed. Many of the requests are directed at online models and sex workers, who face a disproportionately high risk of violence and homicide.
Musk quoted a post saying “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content” and said he was “not kidding” — though what the consequences are exactly is unclear. He has since gone back to posting about Grok’s image generation capabilities.
Why we’re following: This is, of course, not the first scandal related to Grok's lax content moderation. (Perhaps you remember MechaHitler.)
But the app's “spicy” mode, introduced in August, drew fresh scrutiny over the weekend and left many users concerned over how easily nonconsensual porn can be generated, posted, and spread all over social media. And while regulators have given Elon Musk a long leash to date, backlash over the Grok CSAM controversy was quick — and global.
What people are saying: Officials around the world are condemning the images. “This is not spicy. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe,” European Commission spokesperson Thomas Regnier said.
The UK’s Ofcom said it was in touch with X and xAI to understand what steps they have taken to comply with their legal duties to protect users. France told Politico it would investigate the proliferation of sexually explicit deepfakes on X.
India ordered X to make immediate changes to restrict the “obscene” content.
xAI did not respond to requests for comment from multiple news outlets. “Legacy Media Lies,” X told Reuters. Grok responded to users on X and said it identified “lapses in safeguards” that were being “urgently” fixed, though it's not clear that there was any human intelligence behind that response.
“How difficult is it to code a line which prevents AI to not accept clothes changing commands?” @baracenler wrote on X in a post that garnered 1.2 million views.
—Lindsey Choo
The rapture over Claude Opus 4.5
(See ethics disclosure!)
What happened: AI researchers, software developers, economists, and journalists spent the past couple weeks vibe coding in Claude Code using Anthropic’s newest model, Claude Opus 4.5. And they're largely thrilled with the results.
“ive done more personal coding projects over christmas break than i have in the last 10 years,” said Midjourney founder David Holz. He wasn’t alone. Politics blogger Matt Yglesias “asked Claude Code to scrape location data and create a map showing the Dunkin-Starbucks ratio in every county.” Economist Alex Imas used it for data analysis, finding that it got “24-48 hours of work done in 20 minutes.” Developer Martin DeVido connected Claude Code to watering, temperature, and humidity controls for his plant and tasked it with keeping the plant alive—Claude just spotted it flowering.
Elsewhere, AI researcher Andrej Karpathy used it to start building a “home automation master command center” that controls his lights, HVAC, and motion sensors.
Many vibe coders report being stunned by what Claude Code can do, resulting in perhaps the most gushing we've seen about an AI model on X to date — along with significant anxiety that Claude Code will soon make people’s skills obsolete.
Why we’re following: Opus 4.5 was released in November, but usage of the model in Claude Code seems to have exploded over the past month. Even though vibe coding has been a popular nerd activity for some time now, Opus 4.5 seems to have given it new life.
On Dec. 27, Claude Code creator Boris Cherny said that in the past month, 100 percent of his own code had been written by Anthropic's coding agent. That fueled fresh attention to the possibility that software engineering — and who knows what other professions — will be increasingly vulnerable to automation.
What people are saying: “I’m not joking and this isn’t funny,” senior Gemini engineer Jaana Dogan began a popular X post, sharing that Claude Code built a “toy version” of an agent orchestrator her team’s been working on “since last year” — “in an hour.”
“This industry has never been a zero-sum game, so it’s easy to give credit where it’s due,” Dogan added. “Claude Code is impressive work.”
On X, Karpathy said, “I've never felt this much behind as a programmer.”
“Clearly some powerful alien tool was handed around,” Karpathy said, “except it comes with no manual.”
Gradescope co-founder Sergey Karayev wrote, “Claude Code with Opus 4.5 is a watershed moment, moving software creation from an artisanal, craftsman activity to a true industrial process.”
Karpathy replied with a photo of a carpenter hand-sanding a bench. “How I suddenly feel about all of the code I've written so far.”
—Ella Markianos

Side Quests
President Trump’s super PAC raised $102 million in the second half of 2025, led by contributions from OpenAI cofounder Greg Brockman, Crypto.com parent Foris DAX, and private equity investor Konstantin Sokolov.
A roundup of the new tech laws taking effect this year.
Disinformation about Venezuelan president Nicolas Maduro flooded social media within minutes of his capture. Prediction market traders turned a huge profit by betting on Maduro’s capture. Starlink is now offering free internet access to users in Venezuela through Feb. 3.
The US Virgin Islands sued Meta and accused it of knowingly profiting from scam ads. How Reels, once a TikTok copycat, became a hit with users. Yann LeCun on why he stepped down from Meta, having Alexandr Wang as a boss, and the limits of LLMs.
Influencers and OnlyFans models are dominating the O-1 “extraordinary” visa category. The Pennsylvania Supreme Court ruled that police didn’t need a warrant to get a convicted rapist’s Google searches in an investigation. California launched DROP, a free tool that forces registered data brokers to delete residents' personal data upon request. A look at Alaska’s tumultuous yearlong journey to build an AI chatbot for its court system.
How AI has made drone warfare in Ukraine more deadly.
OpenAI is reportedly revamping its audio AI models for an upcoming device. Why OpenAI still has a long way to go to effectively compete with the App Store. More than 40 million people use ChatGPT for health information, an OpenAI report said.
Gemini on Google TV is getting Nano Banana and Veo support.
xAI launched Grok Business and Enterprise on the same day its nudification scandal emerged. A look inside Tesla’s Optimus humanoid robot project that still relies on humans.
Amazon’s Alexa Plus website is now available.
Reddit overtook TikTok as the fourth-most visited site in the UK. The EU is reportedly switching its focus to enforcing tougher tech regulations this year.
Twitter cofounder Biz Stone and Pinterest cofounder Evan Sharp launched an app, Tangle, which they say is a “new kind of social network, designed for intentional living.”
A profile of Max Tegmark, an MIT physics professor and AI safety campaigner who is making appeals to Elon Musk and the Pope.
Scammers are using AI deepfakes to impersonate pastors asking communities for donations. Why experts think AI could be changing the dating world for the worse. Experts warn that AI tools can erode learning as schools increasingly roll out chatbots.

Those good posts
For more good posts every day, follow Casey’s Instagram stories.

(Link)

(Link)

(Link)

Talk to us
Send us tips, comments, questions, and hoaxes: casey@platformer.news. Read our ethics policy here.