About the Role
* This position is at the intersection of Trust and Safety, policy enforcement, and AI training.
You will support the development and evaluation of AI systems that help keep online platforms safe. Think of this role as being a human coach for AI: teaching it what content is acceptable and why certain content violates platform policies.
Key Responsibilities Include:
1. Reviewing and annotating user-generated content according to platform guidelines
2. Applying policy reasoning and clearly documenting moderation decisions
3. Supporting AI training by creating and maintaining high-quality benchmark datasets
4. Reviewing AI-generated moderation decisions for accuracy and consistency
5. Collaborating with cross-functional teams across product, engineering, and integrity
6. Meeting daily productivity and quality targets in a fast-paced environment
7. Contributing insights to improve moderation workflows and AI performance

This role involves exposure to sensitive and disturbing content. Candidates should be comfortable and resilient working with this material on a daily basis.