Model Policy Lead, Video Policy - Trust & Safety
Responsibilities
* TikTok’s Trust & Safety team is seeking a Model Policy Lead for Short Video and Photo to govern how enforcement policies are implemented, maintained, and optimized across both large-scale ML classifiers and LLM-based moderation systems. You will lead a team at the center of AI-driven Trust and Safety enforcement, building Chain-of-Thought policy logic, RCA and quality pipelines, and labeling strategies that ensure our automated systems are both accurate at scale and aligned with platform standards.
* This role combines technical judgment, operational rigor, and policy intuition. You'll work closely with Engineering, Product, and Ops teams to manage how policy is embedded in model behavior, measured through our platform quality metrics, and improved through model iterations and targeted interventions. You'll also ensure that policy changes, often made to improve human reviewer precision, are consistently propagated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.
* You will lead policy governance across four model enforcement streams central to TikTok’s AI moderation systems:
1. At-Scale Moderation Models (ML Classifiers) - Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily.
2. At-Scale AI Moderation (LLM/CoT-Based) - Oversee CoT-based AI moderation systems handling millions of cases per day, producing Chain-of-Thought logic, labeling guidelines, and dynamic prompts that interpret content and deliver policy assessments.
3. Model Change Management - Ensure consistent enforcement across human and machine systems as policies evolve.
4. Next-Bound AI Projects (SOTA Models) - Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement.
* Collaborate with Engineering, Product, Ops, and Policy to align on enforcement strategy, rollout coordination, and long-term model enforcement and detection priorities.
This is a high-impact leadership role requiring strong policy intuition, data fluency, and curiosity about how AI technologies shape the future of Trust and Safety. You’ll work with stakeholders across Product, Engineering, Responsible AI, Ops, and Policy.
Qualifications
* Minimum Qualifications:
* You have at least 5 years of experience in Trust & Safety, ML governance, moderation systems, or related policy roles.
* You have experience in managing or mentoring small to medium-sized teams that are diverse and global.
* You have a proven ability to lead complex programs with global cross-functional stakeholders.
* You have a strong understanding of AI/LLM systems, including labeling pipelines and CoT-based decision logic.
* You are comfortable working with quality metrics and enforcement diagnostics, including FP/FN tracking, RCAs, and precision-recall tradeoffs.
* You are a confident self-starter with excellent judgment, capable of balancing multiple trade-offs to develop principled, enforceable policies. You can translate complex challenges into clear language and persuade cross-functional partners in a dynamic environment.
* You have a bachelor’s or master’s degree in artificial intelligence, public policy, politics, law, economics, behavioral sciences, or related fields.
* Preferred Qualifications:
* Experience working in a start-up or as part of new teams in established companies.
* Experience in prompt engineering.
About TikTok
TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. Our global headquarters are in Los Angeles and Singapore, with offices worldwide.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our team is diverse and global, and we strive to do great things with great people. We foster curiosity, humility, impact, and an "Always Day 1" mindset to achieve meaningful breakthroughs for our users.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills and perspectives. We celebrate diverse voices and strive to reflect the communities we reach.
Trust & Safety
Keeping our platform safe can be demanding. We provide wellbeing programs and support to promote physical and mental health throughout each employee's journey with us. We work collaboratively to ensure a person-centered, innovative approach.