One of the core subjects of this newsletter is the still-emerging field that tech companies call trust and safety. Most of the time, it’s safety that grabs the headlines: hate speech on Facebook; Apple scanning devices for images of child abuse; COVID misinformation on YouTube. Today I want to talk about the trust side of the equation — specifically, how platforms verify profiles, and why they should verify yours, if you want them to. No matter who you are.
Verification might seem like a dry subject, but those little checkmarks mean a lot to people. (You learn this once you get your checkmark, and friends and family immediately start asking how they can get theirs, too.)
They also wind up embarrassing platforms on a regular basis.
In April, I wrote about the case of Amazon’s Twitter army. At the height of Amazon’s labor battle in Bessemer, Alabama, there was no way to distinguish between workers who actually represented Amazon and those who were posting satirically. Twitter’s verification program, which it had just spent three years overhauling, had not anticipated a case in which the authenticity of rank-and-file workers would ever be up for scrutiny.
Twitter is halting the expansion of its verification program, saying it needs to work on the application and review process that lets people into the blue check mark club. This change, where Twitter won’t be letting new people apply for verification, is coming after Twitter admitted that several fake accounts, which reportedly seemed to be part of a botnet, were incorrectly verified. […]
This isn’t the first time Twitter has paused its verification program — it put the public process on hold in 2017, after it received backlash for verifying one of the organizers behind the Unite The Right rally in Charlottesville. It brought back a revamped version in 2021 — and paused it a week later due to an avalanche of requests.
Baked into Twitter’s approach is the idea that verification should be rare and precious — reserved for “notable” accounts only. Facebook and Instagram take a similar approach. One of my core beliefs is that reserving verification for “notable” accounts actually reduces trust in networks overall. It reserves special privileges for elites — like customer service — that should be available to all; it confers moral authority on whoever manages to get a checkmark, even if they are one of the worst actors on the network; and, of course, it breeds contempt between regular users and “bluechecks.”
Now, maybe at this point you’re saying: great, thanks Casey. Another intractable platform problem that shall haunt us as long as we live. Not so! For another platform has tackled the problem of user authenticity in a totally different way, and the results have been … pretty great.
The platform is Tinder, Match Group’s popular dating app. In April I wrote about the app’s move last year to let anyone verify their account by sending in a few selfies:
Upon request, Tinder sends the user a picture of a model performing specific poses. Users take selfies in the poses shown and submit them to Tinder; photos are reviewed by its community team. If the user’s poses match the model’s, they get a blue checkmark. The process takes about a day.
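Tinder hasn’t published how its pose comparison works — the newsletter only notes that a community team reviews the selfies. Purely as illustration, a toy version of the matching step might reduce each pose to a vector of joint angles (from a hypothetical pose-estimation model) and accept a selfie when every angle falls within a tolerance of the requested pose:

```python
# Hypothetical sketch only — not Tinder's actual implementation.
# Assumes each pose has already been reduced to a list of joint
# angles (in degrees) by some upstream pose-estimation model.

def poses_match(requested, submitted, tolerance=15.0):
    """Return True if every submitted joint angle is within
    `tolerance` degrees of the corresponding requested angle."""
    if len(requested) != len(submitted):
        return False  # different pose skeletons can't be compared
    return all(
        abs(r - s) <= tolerance
        for r, s in zip(requested, submitted)
    )

# A selfie close to the requested pose passes; a very different one fails.
print(poses_match([90.0, 45.0, 120.0], [92.0, 43.0, 118.0]))  # True
print(poses_match([90.0, 45.0, 120.0], [10.0, 170.0, 30.0]))  # False
```

In practice the automated check is only one input — as described above, a human review team makes the final call, which is part of why the process takes about a day.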
Catfishing remains a significant problem on dating apps, so self-serve verification like this addresses an obvious problem. And while a blue checkmark on Tinder doesn’t tell you everything you need to know about a potential date, it dramatically increases the odds that the person you’re talking to at least looks like their picture. The benefits are significant enough that, from what I can tell, the majority of Tinder users in my area have now verified their profiles.
Tinder could have stopped there. But executives noted that their approach to verification was limited in one important way: a significant number of Tinder users do not display images of themselves for safety reasons — particularly women and LGBTQ+ people outside of the United States. Many of these users could still benefit from verifying that they are authentic human beings looking for romance and friendship, and not bots or scammers. But if they declined to upload photos of themselves, how could they?
This week, Tinder said it is developing a solution to that problem. The company is preparing to release a second, complementary form of verification for users who don’t want to show their face. Instead of verifying a user’s identity via photos, Tinder will ask them for another form of verification — a driver’s license, for example. (The company said it would “take into consideration expert recommendations, input from our members, what documents are most appropriate in each country, and local laws and regulations, as it determines how the feature will roll out.”)
“An ID verification offers that person a way to say, I've proven to Tinder that I'm real, without having to necessarily show their face if that's something you're not comfortable doing,” said Rory Kozoll, head of trust and safety product at Tinder, in an interview.
The feature is still in development, and there is no timeline for its release (beyond “in the coming quarters”). But Tinder has been testing ID verification in Japan since 2019, and the company says it has worked well enough there that it is planning to release it globally. When that happens, you could choose to verify yourself both through your photos and your ID, with both checks appearing on your profile. (The company hasn’t decided exactly how it will represent this visually on the profile.) And as Tinder gets more and more people to participate, trust in its network will increase.
“We’ve heard from customers from the beginning that authenticity was the biggest issue they had,” said Kozoll, who joined Tinder four years ago. “Just knowing: is this person who they say they are?”
The company declined to tell me how many people have chosen to verify their profiles, but said it’s by far the most popular safety-related feature the company has introduced to date. Tinder also said verified profiles are more likely to “have success” on Tinder — more matches, more conversations. It’s also helping the company counter the perception that its app, like any dating app, is overrun with catfish.
“We’re definitely starting to see in our survey data and qualitative research that people are turning the corner on that,” Kozoll said. “Starting to feel like more of the people they see on Tinder are real. And we can correlate it with the introduction and adoption of photo verification.”
Kozoll told me that, as on Twitter, users are sometimes verified even when they don’t match. But “the volume is extremely low, almost to be statistically insignificant,” he said.
In any case, he said, the benefits of verifying anyone who asks have outweighed the risks of the occasional false positive.
Looking at Tinder’s success here, I can’t help but wonder what would happen if Facebook, Instagram, Twitter, or even YouTube took a similar approach. (You need 100,000 subscribers to request verification on YouTube.) Photo verification could help platforms sort out human beings from bots — an increasingly urgent need in a world where images generated by artificial intelligence can often fool the naked eye. ID verification could help people prove their authenticity even if they aren’t considered notable figures.
I’d also love to see platforms let people optionally verify their employers through an email address, so we’d know if all those scientists, academics, and law enforcement officers are who they say they are, whenever events conspire to bring them into the daily news cycle.
I asked Kozoll to what extent he thought Tinder’s verification measures could be adopted by the industry at large.
“I don’t want to presume to know more about other businesses than they do,” he said. “But I think what we’re doing today is a viable approach for dating apps.”
He noted that Tinder’s ideas here are not totally original — the company took inspiration from gig economy startups that require drivers to verify their photos in a similar way.
“I think businesses that are bringing people together in real life … have more interest in making sure that those people have a higher level of authenticity,” he said. “The same rules may apply more broadly in social media, but I understand the concerns around free speech and anonymity.”
But the key lesson from Tinder is that verification doesn’t have to be a binary, yes-or-no choice. Platforms can choose to let users verify aspects of their identity in different ways, granting different badges or other benefits based on what people choose to share. If they do, I imagine they’ll see what Tinder did: a network where, over time, more people come to trust one another.
In the meantime, you’re likely to be able to verify yourself twice on Tinder before you can do it once anywhere else. As Twitter reconsiders how to verify its user base yet again, Tinder has offered it a compelling roadmap. Here’s hoping Twitter swipes it.
Yesterday I wrote about the platforms’ Taliban dilemma: most of them have long banned the group, often citing legal requirements. But it seems increasingly likely that the group will be recognized by many world governments, possibly forcing platforms to backtrack. Like many of you, I find this all heartbreaking. But for platforms, it may be a matter of realpolitik.
One confusing aspect of the case is that despite its crimes against humanity, the Taliban is for some reason omitted from the State Department’s list of foreign terrorist organizations, which helps to dictate which organizations tech companies will deplatform. YouTube let me know that its policy rationale for banning the group is based on the fact that the Taliban is on the Treasury Department’s sanctions list.
But the fact that the Taliban isn’t on the FTO list helps explain why Twitter, for example, does not have a blanket ban on Taliban accounts. And when the time comes, it could become the basis for tech companies deciding to grant the Taliban a platform.
Taliban links for today: More than 100 new official or pro-Taliban accounts have popped up on Facebook, YouTube, and Twitter in recent days. And here’s a look at the growing sophistication of the Taliban’s social media strategy.
Today in news that could affect public perception of the big tech companies.
⬆️ Trending up: Pinterest added search filters to help users find different hair textures. “When users search for hairstyles, new filter options — for coily, curly, wavy, and straight textures, as well as shaved / bald and protective styles — will appear under the search bar.” Inclusive! (Kait Sanchez / The Verge)
🔃 Trending sideways: Facebook unveiled its list of the most-viewed content on the platform for the quarter, and it told us very little. We need a real-time view into the platform — and had one with CrowdTangle, whose team has now been scattered throughout the organization. Telling us that youtube.com is the most-shared domain on Facebook every three months isn’t nothing, exactly. But it’s disappointingly close. (Issie Lapowsky / Protocol)
Related: Why is the ninth-most shared domain on Facebook a speakers agency for former Green Bay Packers players? (Ethan Zuckerman)
⭐ Misinformation about COVID-19 is running rampant in comments at public forums, including city council and school board meetings. It has gotten so bad that platforms are sometimes called on to intervene after the videos are uploaded. Here are David Klepper and Heather Hollingsworth at the AP:
During a meeting of the St. Louis County Council earlier this month, opponents of a possible mask mandate made so many misleading comments about masks, vaccines and COVID-19 that YouTube removed the video for violating its policies against false claims about the virus.
“I hope no one is making any medical decisions based on what they hear at our public forums,” said County Councilwoman Lisa Clancy, who supports mask wearing and said she believes most of her constituents do too. The video was restored, but Clancy’s worries about the impact of that misinformation remain.
Amazon emailed third-party sellers in an effort to bring them into its fight against antitrust legislation. The company wants to set up meetings with them lest they decide to side with Lina Khan and fight. (Annie Palmer / CNBC)
Apple dismissed a security researcher’s finding that two different images would generate the same hash, complicating efforts to discover child abuse imagery. The flaw discovered by researchers will not be present in the software’s final version, it said. (Russell Brandom / The Verge)
China found Tencent-owned WeChat guilty of illegally transferring data and ordered it to make changes by the end of the month. “The regulator said the apps had illegally transferred users' contact list and location data, while also harassing them with pop-up windows.” (Reuters)
But: Despite regulatory pressure, Tencent still earned $6.6 billion in profits in the most recent quarter. (Iris Deng and Josh Ye / South China Morning Post)
The continuing surges of COVID-19 have made it more difficult for Apple, Google and others to shift production away from China. Google reportedly planned to build its forthcoming Pixel 6 in Vietnam but had to change plans due to engineering shortages and travel restrictions. (Chen Ting-Fang and Lauly Li / Nikkei)
Administrators of anti-vaxx Facebook groups are granting moderators “group expert” badges, leveraging platform features to obtain false credibility. I understand Facebook doesn’t want to be the arbiter of who is an “expert” in every Facebook group, but it raised that question when it shipped the feature. (David Gilbert / Vice)
ByteDance joined the Open Invention Network, the world’s largest patent non-aggression consortium. The group “protects Linux and related open-source software and the companies behind them from patent attacks and patent trolls.” (Steven J. Vaughan-Nichols / ZDNet)
⭐ Facebook vendor Accenture is requiring some contractors to return to the office amid a COVID-19 surge, even as Facebook itself has delayed a return until next year. Here’s Salvador Rodriguez at CNBC:
Moran is on a team of approximately 18 Accenture contractors who generate data for Facebook’s machine-learning models. The team had been working remotely for more than a year when they were informed on July 15 that they’d be needed back at Facebook’s Mountain View, California, offices.
The contractors were given two weeks to prepare for the return; to stay remote, they would need a note from a doctor excusing them from the office. Those who needed more time were told they would have to dip into their 10 days of personal time off or go unpaid, Moran said.
YouTube added thumbnail previews for chapters within videos. Could be useful for researchers, among other people. (Amanda Silberling / TechCrunch)
Twitter added support for Spaces in its API. Developers can now build Spaces discovery into their apps; the resurrection of the Twitter API is one of the happiest stories of the year. (Sarah Perez / TechCrunch)
MobileCoin, the cryptocurrency startup that may soon integrate with Signal, raised $66 million. It’s now valued at $1 billion. (Connie Loizos / TechCrunch)
Roblox is struggling to moderate recreations of mass shootings. The Verge found instances of the phenomenon simply by querying “Christchurch” via in-app search. (Russell Brandom / The Verge)
Those good tweets
Talk to me
Send me tips, comments, questions, and verified profiles: firstname.lastname@example.org.