Engineering Analyst, Content Adversarial Red Team, Google
Fast‑paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever‑evolving digital world.
About the Job
We are seeking a pioneering expert in Artificial Intelligence (AI) Red Teaming to shape and lead our content safety strategy. In this pivotal role, you will design and direct red‑teaming operations, creating innovative methodologies to uncover novel content abuse risks. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive safety initiatives. As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
Responsibilities
Design, develop, and oversee the execution of innovative red‑teaming strategies to uncover content abuse risks. Create and refine new red‑teaming methodologies, strategies, and tactics.
Influence across Product, Engineering, Research, and Policy to drive the implementation of safety initiatives. Be a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.
Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject‑matter expert, sharing knowledge of adversarial and red‑teaming techniques, and risk mitigation.
Represent Google’s AI safety efforts in external forums and conferences. Contribute to the development of industry‑wide best practices for responsible AI development.
Be comfortable with exposure to graphic, controversial, or upsetting content.
Qualifications
Bachelor’s degree or equivalent practical experience.
7 years of experience in trust and safety, risk mitigation, cybersecurity, or related fields.
7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
6 years of experience in adversarial testing, red‑teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.
Experience with the Google infrastructure/tech stack, tooling, and Application Programming Interfaces (APIs). Experience with web services, Colab deployment, SQL and data handling, Machine Learning Operations (MLOps), or other AI infrastructure.
Master’s degree or PhD in a relevant quantitative or engineering field (preferred).
Experience in an individual contributor role within a technology company, focused on product safety or risk management (preferred).
Experience working closely with both technical and non‑technical teams on dynamic solutions or automations to improve user safety (preferred).
Understanding of AI systems/architecture including specific vulnerabilities, machine learning, and AI responsibility principles (preferred).
Ability to influence cross‑functionally at various levels and articulate technical concepts to both technical and non‑technical stakeholders (preferred).
Excellent written and verbal communication and presentation skills (preferred).
Equal Employment Opportunity
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google’s EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.