About the Role
We are looking for a Senior Analyst with an applied scientist mindset to join the Account Risk Management team. This is a senior individual contributor role with meaningful ownership and authority. You’ll design large‑scale enforcement systems—developing novel signals, running experiments, and deploying ML‑driven solutions to identify malicious and coordinated networks at scale. You’ll form hypotheses, build measurement frameworks, and iterate through experiments while balancing research depth with business impact.
Responsibilities
Design enforcement systems from the ground up by developing novel behavioral, device, and graph signals to identify coordinated networks and adversarial actors at scale.
Research emerging attack patterns, investigate new adversarial behaviors, prototype detection approaches, and validate signal effectiveness through rigorous analysis.
Evaluate, fine‑tune, and integrate machine learning and anomaly‑detection models to assess account‑level risk and surface suspicious patterns in near real time.
Conduct research and development on behavioral signals—identify, design, and validate new features or attributes that improve detection coverage and enforcement precision.
Contribute to continuous improvement of detection and enforcement pipelines through rule refinement, feedback loops, and model retraining cycles.
Communicate insights through concise reports, dashboards, and presentations that drive executive and operational decisions.
Qualifications
Minimum Qualifications
Bachelor’s degree in Computer Science, Statistics, Data Science, or a related quantitative field.
At least 5 years of experience in trust & safety, fraud detection, or threat intelligence.
Strong SQL and Python skills for data exploration, analysis, and pipeline automation.
Experience applying machine learning and anomaly‑detection techniques to large, adversarial datasets.
Experience with graph analytics or entity‑resolution techniques for detecting coordinated or linked behaviors.
Strong written and verbal communication skills; able to distill complex analytical findings for diverse audiences.
Preferred Qualifications
Practical experience applying AI/ML techniques to solve diverse business challenges.
Hands‑on experience with distributed computing frameworks (Spark, Hive) for large‑scale data processing.
Background in trust and safety‑focused roles, with a track record of mitigating risks and ensuring platform integrity.
Publication or presentation history demonstrating analytical depth.
Experience designing and running controlled experiments in production systems.
Trust & Safety
Content that this role interacts with includes images, video, and text related to everyday life, but may also contain bullying, hate speech, child safety concerns, depictions of harm to self and others, and harm to animals. As a result, daily exposure to harmful content is possible.
Employee Well‑Being & Support
TikTok is committed to the well‑being of all employees and provides comprehensive, evidence‑based programs to promote physical and mental well‑being throughout each employee’s journey with us. This role may be psychologically demanding and emotionally taxing; employees receive support through integrated well‑being initiatives.