AI Security Manager

Role Purpose
Operationalize AI system security for products that embed ML or generative-AI capabilities. Build controls spanning model, data, prompt, and supply-chain risk while integrating with existing Secure SDLC/DevSecOps practices.

Key Outcomes
- Publish an AI Security Control Baseline (model inventory, data lineage, red-teaming, evaluations, guardrails).
- Integrate AI checks into CI/CD: dataset governance, model/package SBOM, safety/toxicity tests, prompt policies.
- Run AI misuse/abuse testing (jailbreaks, prompt injection, data exfiltration) with repeatable playbooks.
- Establish a Responsible AI review process with Privacy/Legal and GRC; document residual risk.

Responsibilities
- Build and maintain model and dataset inventories; define criticality tiers and protections.
- Drive AI red-team exercises and security evaluations; partner with Product and Data Science teams.
- Define hardening standards for secrets, keys, and model endpoints (authorization, rate limiting, content filters).
- Monitor the supply chain for models, libraries, and embeddings; enforce signing and provenance.
- Produce executive-readable risk narratives and dashboards.

Required Qualifications
- 6–8+ years in security, including 2–3 years in AI/ML security or closely related fields.
- Familiarity with LLM risks, RAG, and model-hosting patterns (e.g., Azure OpenAI).
- Proficiency in Python or scripting for test harnesses and CI/CD integrations.
- Experience with privacy and data protection in EMEA (GDPR, data minimization).

Preferred Qualifications
- Experience in AI red teaming and safety evaluations; exposure to Azure AI or GCP Vertex.
- Certifications: CCSP, CISSP, Azure AI Engineer (or equivalent).

Key Performance Indicators (KPIs)
- Coverage of AI features by the control baseline (%).
- Red-team findings trend and closure rate.
- Model/data inventory completeness.
- Safety test coverage and regression stability.

#LI-KS1