Overview
The Live Containment team is responsible for mitigating emerging platform risks on Livestreams through comprehensive containment strategies. The team works to quickly and thoroughly contain violative content, bridging the gap between immediate incident response and long-term policy or product solutions. They analyze trends, identify enforcement gaps, and implement strategic interventions to prevent the spread of harmful content. By working closely with teams across Trust & Safety, they help strengthen moderation and model systems for the most severe platform risks and improve platform safety.
Responsibilities
* Conduct sweeps for violative content and trends across features to mitigate emerging risks.
* Identify risk patterns and incentives for abuse; develop and implement tactical strategies and methods to detect and enforce against risky content at scale, leveraging technical solutions such as SQL, Python, and machine learning.
* Partner with rapid response teams during crises and escalations to ensure timely and thorough containment of high-risk content.
* Conduct root-cause analyses to identify gaps in enforcement systems and provide actionable insights and long-term recommendations.
* Operate with minimal guidance; assess and report on the effectiveness of mitigation strategies and workflows, iterating on strategies for continuous improvement. Balance precision and recall of strategies through a combination of manual, scaled, and automated review methods.
* Maintain documentation and insight sharing to support ongoing risk containment efforts.
* Ensure adherence to content policies and regulatory requirements while balancing enforcement effectiveness.
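The precision/recall balancing mentioned in the responsibilities above can be illustrated with a minimal sketch. The function and data below are hypothetical examples for clarity, not the team's actual tooling or enforcement data:

```python
# Minimal sketch: measuring precision and recall of a detection strategy.
# IDs below are hypothetical; a real workflow would pull these from
# review queues and labeled enforcement outcomes.

def precision_recall(flagged, violative):
    """Compute precision and recall for a set of flagged item IDs
    against the set of truly violative item IDs."""
    flagged, violative = set(flagged), set(violative)
    true_positives = len(flagged & violative)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(violative) if violative else 0.0
    return precision, recall

# Example: a strategy flags 4 livestreams; 3 of them are among the
# 6 truly violative streams in the sample.
p, r = precision_recall(flagged=[1, 2, 3, 4], violative=[2, 3, 4, 5, 6, 7])
# p == 0.75 (3 of 4 flags were correct), r == 0.5 (3 of 6 violations caught)
```

Broadening a strategy (flagging more content) typically raises recall at the cost of precision, which is why the role combines manual, scaled, and automated review methods to tune that trade-off.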
Qualifications
Minimum Qualifications:
* Bachelor's degree in Engineering, Computer Science, Mathematics, Statistics, Analytics, a related technical field, or equivalent practical experience.
* Strong analytical skills with the ability to identify patterns, trends, and enforcement gaps in content moderation.
* Excellent written and verbal communication skills, with the ability to present findings and recommendations clearly.
* Basic proficiency in SQL and spreadsheet software to manage large data sources.
* Familiarity with automation tools, AI-driven content moderation, or data analysis techniques.
* Familiarity with content policies, regulatory requirements, and platform integrity challenges.
* Comfortable working in fast-paced environments with evolving risks and priorities.
Preferred Qualifications:
* Experience working in social media, tech platforms, or online content moderation at scale.
* Prior experience in incident management, crisis management, or risk mitigation strategies.
* Proficiency in Python or other data analysis tools for investigating content patterns.