What it's like to make policy at TikTok

Former policy manager Marika Tedroff talks nudity, China, and a "very toxic" work culture


Now that TikTok is the biggest app in the world, it’s more important than ever to get a sense of how it works. And yet between the company’s Chinese origins, COVID-enforced remote work, and standard-issue tech industry secrecy around product development, I’ve had relatively few chances to meet current and former employees and get to know the place.

That’s why I was interested last week to see this blog post from Marika Tedroff. From December 2020 to June of this year, Tedroff was a policy manager at TikTok, where she helped to develop content policies for ads and creator monetization in Europe, the Middle East, and Africa. Originally from Sweden, Tedroff also worked with the company’s trust and safety teams on subjects including adult content, nudity, weight loss and body image issues, cannabidiol (CBD), child safety, and platform integrity.

“I joined in 2020 during lockdown, and given the rapid growth TikTok experienced over the past few years, my role shifted every month,” said Tedroff, who joined the company after completing a master’s degree in technology policy. “I mainly did policy work, but also led various efforts to improve moderation processes and systems on the product side.”

In her blog post, Tedroff writes about a perennial challenge in content moderation: creating a workable policy around nudity. Different cultures have vastly different norms around the subject, but platforms typically take a one-size-fits-all approach to the issue, creating complications for everyone involved.

Tedroff describes the company’s challenges in developing such a policy for ad products and operationalizing it. “Ad creatives & landing page must not focus on individual intimate body parts, such as a genitalia, buttocks, breasts,” the policy reads today — but what counts as “focus”? As with so many questions in content moderation, go one level deeper than the text and you’re already in the realm of the philosophers.

She also reflects on the sometimes absurd situations that policy managers find themselves in. “Coming to TikTok and having to say the full version of the abbreviation ‘WAP’ in front of 150 people in my first weeks … was a bit of a shock to the system,” she writes.

All in all, Tedroff says her experience at TikTok was mixed. “TikTok was both a fantastic place to work, and very toxic at the same time,” she writes. “Some teams have awful work-life balance, moderators are treated bad, the Chinese working style and culture is very, very different to ours, and the company is not very transparent.” At the same time, she said, she felt like her work made a difference.

I asked Tedroff about making policies, ByteDance’s influence, and why the workplace could lead to burnout for her colleagues. Highlights of our conversation follow; this interview has been lightly edited and condensed.

Tedroff says she plans to write more about her experience in the future; you can find her Substack, Zoning Out, at this link.

Casey Newton: What’s the basic story of how policies are developed at TikTok? How many people are involved, and who is the ultimate decider? Do decisions stop with TikTok’s CEO, or ByteDance’s?

Marika Tedroff: The policy team I was part of essentially acted as a spider in the web, and was responsible for gathering all the information, feedback, and approvals required to launch a policy. This included getting input from legal, public policy, PR, minor safety, and very often advice from external experts. While the general process was similar for all policies, the specific steps varied depending on why the policy was updated and the specific content area and market it aimed to cover.

Firstly, there are so, so many reasons why a policy has to be updated! This matters. One reason why policies exist is to protect against risk, and risk is in a constant state of change. A policy might be updated because of new local laws or regulations TikTok has to comply with, or because of policy gaps, or perhaps a world event has shifted the risk levels on the platform (e.g., COVID and an increase in medical misinformation). We also updated policies simply because moderation inconsistency was high, meaning enforcement was not successful.

In general, after mapping out all the baseline restrictions and limitations, such as legal restrictions in the target markets and input from stakeholders, the policy manager and team would draft a policy that would go through another round of review, approvals, and training before it went live. My experience writing policies at TikTok was generally positive, and the process was rigorous yet quick. My team was responsible for reflecting everyone’s voice in the final version, while making sure we could also enforce the policy at scale (which is the hardest part).

You write that shortly after getting started, you were tasked with developing policies around nudity. It’s a famously difficult job — both to write the policies and to enforce them. How did those policies change while you were there? And how did TikTok get better at enforcing them? 

Everything related to adult content is complicated because it is so sensitive and subjective. The policies were iterated and updated whenever there was a need, such as when new scenarios weren’t covered by the existing policies or when emerging risks had to be addressed (e.g., a new trend that triggers content perceived as harmful). Overall, I would say TikTok got better at enforcing them because we learned through rounds of iterations what worked and what didn’t.

One of the most complex challenges with adult content is the diversity of perspectives in the different markets. What is considered too much skin exposure or too sexual varies massively in different markets, as all countries have distinct cultures and attitudes on this subject. That results in highly localized policies to reflect these different opinions.

I’d say this is one of the areas where every market’s opinion and attitude would vary the most. An important aspect of adult content is the question of context and intent — if someone is posting explicit content to promote their underwear brand, or is posting videos of sexualized dancing, but as part of a challenge — should we still remove it? Whenever we saw new scenarios like these, we would make policy changes with the intention of capturing intent and context more accurately. It was very hard! 

You write that you encountered various “operational challenges” in trying to remove misinformation and spam. What were some of them?

While I was not involved with removing scam accounts and posts — the risk investigation teams handle that — we still had to consider this when drafting policies, especially when drafting policies for monetized products.

The biggest challenge with bad actors is that they are so sophisticated and intelligent! It was honestly quite cool to see the creativity and the approaches they were using to circumvent the policies. It was challenging to draft guidelines for content that isn’t obviously problematic at first glance. Videos might look harmless, but looking more closely we could see that bad actors were redirecting users from an innocent landing page to another site for prohibited services — drugs, porn, you name it, the internet is truly a trash can from the inside.

Technically, this extensive review process was beyond the scope and liability of TikTok as a platform. Still, when I worked there, we often wanted to protect users from harm, even if it happened on sites outside the app. 

I worked very little on misinformation, but just like capturing bad actors, capturing misinformation is problematic because it’s rarely evident at face value. The volume and impact of misleading or fake content also vary depending on what is happening in the world. For instance, TikTok focused a lot on COVID misinformation during its peak, and before I left, they were focused on Ukraine/Russia in terms of capturing false claims and misinformation. During events like this, very often we would see an increase in misinformation, both in terms of people sharing inaccurate facts and more ‘sophisticated’ misleading content where people would exploit the conflict by, for instance, raising money for organizations that don’t exist.

It’s hard to control everything as a crisis is unfolding, since no one really knows what is true or not at that point. If we immediately removed content, users felt frustrated that we were silencing important information. But if we waited until we could verify the claims, users felt frustrated that we were facilitating the spread of fake news … and, as we all know, things can go viral quickly on TikTok.

I think the most terrifying aspect of misinformation/misleading content is the subtle yet so, so powerful influence content has on us when it’s not explicit, as you rightly pointed out here.

Recently we’ve seen some reports about how difficult the work environment is at TikTok, and you yourself say here that it could be “very toxic.” How so? Are there ways you think it’s different from other tech companies its size?