Job Opportunity
We are seeking an innovative professional in Artificial Intelligence (AI) Red Teaming to shape and lead our content safety strategy.
In this pivotal role, you will draw on considerable direct experience in adversarial testing and red teaming, particularly of Generative AI, to design and direct red teaming operations and to create innovative methodologies that uncover novel content abuse risks.
You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive safety initiatives.
As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques.
You will also represent our organization's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
Responsibilities:
* Develop and oversee the execution of innovative red teaming strategies to identify and mitigate content abuse risks.
* Create and refine new red teaming methodologies, strategies, and tactics.
* Drive the implementation of safety initiatives through collaborative efforts with cross-functional teams.
* Provide actionable insights and recommendations to executive leadership on content safety matters.
* Mentor junior and senior analysts, promoting excellence and continuous growth within the team.
* Establish yourself as a subject matter expert, sharing knowledge of adversarial and red teaming techniques and risk mitigation.
* Represent our organization's AI safety efforts in external forums and conferences.
* Contribute to the development of industry-wide best practices for responsible AI development.