Client: Global Technology & Social Media Company
Contract Length: 11 months
Location: 100% onsite – Dublin 4 (South Dublin, large tech campus)
Working Hours: Monday–Friday, 9:00am–6:00pm
Salary: €73,000 per annum (pro rata)
Team: Integrity / Trust & Safety Operations

About the Role

Our client is a global leader in social media and digital platforms, operating products used by billions of people worldwide. They are launching a new AI-focused Integrity Operations pilot designed to improve how content is reviewed, classified, and moderated at scale.

This role sits at the intersection of Trust & Safety, policy enforcement, and AI training. You will support the development and evaluation of AI systems that help keep online platforms safe. Think of this role as being a human coach for AI, teaching it what content is acceptable, what violates policy, and why.

What you'll be doing

- Review and annotate user-generated content in line with platform policies
- Apply policy reasoning and clearly document moderation decisions
- Support AI training by creating and maintaining high-quality benchmark ("golden") datasets
- Review AI-generated moderation decisions for accuracy and consistency
- Collaborate with cross-functional teams including Product, Engineering, and Integrity
- Meet daily productivity and quality targets in a fast-paced environment
- Contribute insights to improve moderation workflows and AI performance

This role involves regular exposure to sensitive and potentially disturbing content. Candidates must be comfortable and resilient working with this material on a daily basis.

What we're looking for

Required:
- 2–4+ years' experience in content moderation, trust & safety, quality assurance, investigations, or policy-based roles
- Experience working in structured, target-driven or quota-based environments
- Strong critical thinking and policy interpretation skills
- Excellent written and verbal communication skills
- High attention to detail and consistency
- Ability to work fully onsite, 5 days per week

Non-Negotiables:
- Willingness to work with graphic or objectionable content
- Ability to meet SLAs and productivity targets
- Experience following strict guidelines and operational processes

Nice to Have:
- Experience reviewing AI-generated content or chatbot conversations
- Familiarity with annotation or content moderation tools
- Experience working with vendors or external partners
- Data analysis or QA reporting experience

Why This Role?

- Work on large-scale, real-world AI systems
- Contribute directly to online safety and platform integrity
- Gain experience within a world-class Trust & Safety operation
- Competitive contract compensation and structured working hours