What it's like to make policy at TikTok
Former policy manager Marika Tedroff talks nudity, China, and a "very toxic" work culture
Now that TikTok is the biggest app in the world, it’s more important than ever to get a sense of how it works. And yet between the company’s Chinese origins, COVID-enforced remote work, and standard-issue tech industry secrecy around product development, I’ve had relatively few chances to meet current and former employees and get to know the place.
That’s why I was interested last week to see this blog post from Marika Tedroff. From December 2020 to June of this year, Tedroff was a policy manager at TikTok, where she helped to develop content policies for ads and creator monetization in Europe, the Middle East, and Africa. Originally from Sweden, Tedroff also worked with the company’s trust and safety teams on subjects including adult content, nudity, weight loss and body image issues, cannabidiol (CBD), child safety, and platform integrity.
“I joined in 2020 during lockdown, and given the rapid growth TikTok experienced over the past few years, my role shifted every month,” said Tedroff, who joined the company after completing a master’s degree in technology policy. “I mainly did policy work, but also led various efforts to improve moderation processes and systems on the product side.”
In her blog post, Tedroff writes about a perennial challenge in content moderation: creating a workable policy around nudity. Different cultures have vastly different norms around the subject, but platforms typically take a one-size-fits-all approach to the issue, creating complications for everyone involved.
Tedroff describes the company’s challenges in developing such a policy for ad products and operationalizing it. “Ad creatives & landing page must not focus on individual intimate body parts, such as a genitalia, buttocks, breasts,” the policy reads today — but what counts as “focus”? As with so many questions in content moderation, go one level deeper than the text and you’re already in the realm of the philosophers.
She also reflects on the sometimes absurd situations that policy managers find themselves in. “Coming to TikTok and having to say the full version of the abbreviation ‘WAP’ in front of 150 people in my first weeks … was a bit of a shock to the system,” she writes.
All in all, Tedroff says her experience at TikTok was mixed. “TikTok was both a fantastic place to work, and very toxic at the same time,” she writes. “Some teams have awful work-life balance, moderators are treated bad, the Chinese working style and culture is very, very different to ours, and the company is not very transparent.” At the same time, she said, she felt like her work made a difference.
I asked Tedroff about making policies, ByteDance’s influence, and why the workplace could lead to burnout for her colleagues. Highlights of our conversation follow; this interview has been lightly edited and condensed.
Tedroff says she plans to write more about her experience in the future; you can find her Substack, Zoning Out, at this link.
Casey Newton: What’s the basic story of how policies are developed at TikTok? How many people are involved, and who is the ultimate decider? Do decisions stop with TikTok’s CEO, or ByteDance’s?
Marika Tedroff: The policy team I was part of essentially acted as a spider in the web, and was responsible for gathering all the information, feedback, and approvals required to launch a policy. This included getting input from legal, public policy, PR, minor safety, and very often advice from external experts. While the general process was similar for all policies, the specific steps varied depending on why the policy was updated and the specific content area and market it aimed to cover.
Firstly, there are so, so many reasons why a policy has to be updated! This matters. One reason why policies exist is to protect against risk, and risk is in a constant state of change. A policy might be updated because of new local laws or regulations TikTok has to comply with, or because of policy gaps, or perhaps a world event has shifted the risk levels on the platform (e.g., COVID and an increase in medical misinformation). We also updated policies simply because moderation inconsistency was high, meaning enforcement was not successful.
In general, after mapping out all the baseline restrictions and limitations, such as legal restrictions in the target markets and input from stakeholders, the policy manager and team would draft a policy that would go through another round of review, approvals, and training before it went live. My experience writing policies at TikTok was in general positive and the process was rigorous yet quick. My team was responsible for reflecting everyone’s voice in the final version, while making sure we could also enforce the policy at scale (which is the hardest part).
You write that shortly after getting started, you were tasked with developing policies around nudity. It’s a famously difficult job — both to write the policies and to enforce them. How did those policies change while you were there? And how did TikTok get better at enforcing them?
Everything related to adult content is complicated because it is so sensitive and subjective. The policies were iterated and updated whenever there was a need for it, such as when new scenarios weren’t covered by the policies or when emerging risks needed to be addressed (e.g., a new trend that triggers content perceived as harmful). Overall, I would say TikTok got better at enforcing them because we learned through rounds of iterations what worked and what didn’t.
One of the most complex challenges with adult content is the diversity of perspectives in the different markets. What is considered too much skin exposure or too sexual varies massively in different markets, as all countries have distinct cultures and attitudes in this specific area. That results in highly localized policies that reflect these different opinions.
I’d say this is one of the areas where every market’s opinion and attitude would vary the most. An important aspect of adult content is the question of context and intent — if someone is posting explicit content to promote their underwear brand, or is posting videos of sexualized dancing, but as part of a challenge — should we still remove it? Whenever we saw new scenarios like these, we would make policy changes with the intention of capturing intent and context more accurately. It was very hard!
You write that you encountered various “operational challenges” in trying to remove misinformation and spam. What were some of them?
While I was not involved with removing scam accounts and posts — that falls to the risk investigation teams — we still had to consider this when drafting policies, especially those covering monetized products.
The biggest challenge with bad actors is that they are so sophisticated and intelligent! It was honestly quite cool to see the creativity and the approaches they were using to circumvent the policies. It was challenging to draft guidelines for content that isn’t obviously problematic at first glance. Videos might look harmless, but on closer inspection we could see that bad actors had redirected users from an innocent landing page to another site for prohibited services — drugs, porn, you name it; the internet is truly a trash can from the inside.
Technically, this extensive review process was beyond the scope and liability of TikTok as a platform. Still, when I worked there, we often wanted to protect users from harm, even if it happened on sites outside the app.
I worked very little on misinformation, but just like capturing bad actors, capturing misinformation is problematic because it’s rarely evident at face value. The volume and impact of misleading or fake content also vary depending on what is happening in the world. For instance, TikTok focused a lot on COVID misinformation during its peak, and before I left, the focus was on capturing false claims and misinformation about Ukraine/Russia. During events like these, we would very often see an increase in misinformation, both in the sharing of inaccurate facts and in more ‘sophisticated’ misleading content, where people would exploit the conflict by, for instance, raising money for organizations that don’t exist.
It’s hard to control everything as a crisis is unfolding, since no one really knows what is true or not at that point. If we immediately removed content, users felt frustrated that we were silencing important information. But if we waited until we could verify the claims, users felt frustrated that we were facilitating the spread of fake news … and, as we all know, things can go viral quickly on TikTok.
I think the most terrifying aspect of misinformation/misleading content is the subtle, yet so, so powerful influence content has on us when it’s not explicit, as you rightly pointed out here.
Recently we’ve seen some reports about how difficult the work environment is at TikTok, and you yourself say here that it could be “very toxic.” How so? Are there ways you think it’s different from other tech companies its size?
As I also wrote in my post, several truths and experiences can coexist. I’ve heard many stories from colleagues who have had tough and toxic experiences, which are all valid. Some teams, often but not limited to engineering and product, seemed to experience much worse work-life balance. They had more exposure to the infamous non-inclusive culture (e.g., meetings were held in Chinese despite English speakers being on the call, and people were ignored). And there were tricky situations where decisions were made at HQ without looping in relevant, market-specific employees. It’s hard to do your work properly in a context like this, which must be incredibly frustrating. I had never worked at any other big company before TikTok, so I am not sure if this was unique to TikTok or just a “big company” thing.
I was personally pretty protected from the negative aspects; I had a supportive manager, and my work was very European Union-focused. While I worked a lot with the American market, my work hours were still alright, and I didn’t have to deal that much with the non-inclusive culture I mentioned. I might be biased because many people I worked with shared my views on what was considered normal.
As for corporate culture, the Chinese working style is very different from the American and European one. The negative bits that I thiiink come from Chinese ways of working include long (but not necessarily efficient or productive) hours, hierarchical structures, a non-inclusive culture, a lack of market understanding, and a lack of transparency. Some more positive bits that I think come from Chinese ways of working, and that we can learn from, include their ability to leverage operations and manpower, their technical capacity, and their bias towards efficiency and productivity.
It’s easy to complain about Chinese company culture because so many things just don’t make sense (e.g., why were people promoted without receiving a salary increase?? Why are there managers with 100+ direct reports?? Why aren’t they listening to our concerns??). But it’s not black and white; our way of working is probably equally strange to them.
One thing we American reporters are always trying to understand is how being a Chinese company influences TikTok, both at the corporate level and the level of the policies that people like yourself wrote there. How did you come to think about that?
The fact that it’s Chinese definitely influences the company, just as Swedish companies will be more Swedish and American companies will be more American. Chinese values and working culture are just so different from the culture we are used to, and some aspects that are considered normal in Chinese working culture would be almost unethical to us.
ByteDance still has a lot of influence and control over aspects such as product decisions. It was always surprising to me how much influence they had over some decisions, even in situations where it didn’t make sense for them to have decision-making power. But on a policy level, regional managers were ultimately in charge of their own markets. I did not have to loop in or get approval from any contact in China to ship policies.
To me, back in 2020, many of the policies felt ad hoc and not culturally adapted to the specific markets. Without taking into account the nuances, cultures, religions, and attitudes of the target markets, policies can be perceived as offensive and unethical. And that was what I experienced at TikTok before those policies were changed.
I never got the impression that any of this came from bad intent; it was rather due to a lack of awareness. That’s the core issue with many technology policy issues, though; even if the intent isn’t necessarily malicious, if the output is flawed, skewed in a negative direction, or impacts certain groups negatively, it doesn’t really matter how or why. Fix it.
I also believe the company’s hierarchical culture influenced these kinds of decisions; completing the task was more important than doing it right. That never made sense to me, because when operating at scale, that mindset would more often than not create more problems.
There’s also a contingent in the United States that says maybe TikTok should be banned outright, just because we’ll never be able to ensure that China isn’t using it to surveil Americans or run influence campaigns. Having worked there, how possible do you think that would be?
I don’t think it’s very likely, yet. It seems to be more of a “what if” at this stage. But even if you are working at TikTok, you don’t have visibility into everything going on.
Governing
- A look at Republicans’ furor over Gmail’s spam filters, which they blame — groundlessly — for the decline in their fundraising numbers. (Isaac Stanley-Becker and Josh Dawsey / Washington Post)
- A California judge said Rumble’s antitrust lawsuit against YouTube can proceed. It accuses Google of giving preference to its video platform on Android and in search results. (Adi Robertson / The Verge)
- YouTube demonetized two videos from Jordan Peterson in which he misgendered actor Elliot Page and compared gender-affirming care to “Nazi-era medical experimentation.” (Ina Fried / Axios)
- Fact checkers on Facebook and Instagram are labeling as false claims that the United States has entered a recession, drawing criticism that the platforms have overreached. (Robby Soave / Reason)
- Meta released the Social Capital Atlas: “new datasets and research insights that use data from Facebook friendships to provide measures of social capital across ZIP codes, high schools and colleges in the United States.” (Meta)
- A look at “algorithmic anxiety”: “Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.” (Kyle Chayka / New Yorker)
Industry
- Twitter sent subpoenas to 17 banks that helped Elon Musk put together his agreement to buy the company, part of an apparent strategy to bury him in legal requests. (Kevin T. Dugan / New York)
- Twitter also issued subpoenas to various Musk associates, including investors Chamath Palihapitiya, David Sacks, Steve Jurvetson, Marc Andreessen, Jason Calacanis, and Keith Rabois. (Elizabeth Dwoskin and Faiz Siddiqui / Washington Post)
- “Pinterest shares jumped on better-than-expected user numbers even as earnings and revenue missed estimates and the company gave weak guidance for the third quarter.” (Jonathan Vanian / CNBC)
- A charming, personal account of 10 years spent on (straight) Tinder finds that the platform excels at arranging mediocre hookups, and little else. (Allison P. Davis / New York)
- Another day, another $190 million drained out of a crypto bridge due to a security exploit. (Brian Newar / CoinTelegraph)
Those good tweets
weed got me opening doors like im in resident evil 1
— amber🌪 (@addviolins) 11:30 PM ∙ Jul 29, 2022
phones need to try to be a little more honest about whether or not they actually have signal
— charlie (@chunkbardey) 8:31 PM ∙ Jul 30, 2022
idea: a gameshow called Imposter Syndrome where you take 10 senior developers and tell them that one of them is actually just a sales guy who's been taught to say DevOps buzzwords. to win, they have to figure out who the fake dev is. the trick: there is no sales guy
— Cain Maddox (@ctrlshifti) 8:17 AM ∙ Aug 1, 2022
this is a crazy advertisement
— tia (@califortia) 3:27 AM ∙ Aug 1, 2022
Talk to me
Send me tips, comments, questions, and TikTok stories: casey@platformer.news.