Artificial Intelligence Red Teaming Specialist
We are seeking a pioneering expert in AI Red Teaming to shape and lead our content safety strategy.
In this pivotal role, you will design and direct red teaming operations, developing innovative methodologies to uncover novel content-abuse risks. Your extensive experience in adversarial testing and red teaming will keep us ahead of emerging threats.
As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent our efforts in external forums, collaborating with industry partners to develop best practices for responsible AI.
The successful candidate will be responsible for:
* Designing and overseeing the execution of innovative red teaming strategies.
* Creating and refining new red teaming methodologies, techniques, and tactics.
* Driving cross-functional collaboration to implement safety initiatives.
* Providing executive leadership with actionable insights and recommendations on content safety issues.