Responsibilities:
1. Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models, with clear performance targets.
2. Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare them against benchmark datasets.
3. Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks.
4. Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics.
5. Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation.
Requirements:
6. A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related area, complemented by a solid track record in AI R&D (including publications at A* conferences).
7. Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance.
8. Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows.
9. Strong expertise in PyTorch and Hugging Face libraries with practical experience in developing fine-tuning pipelines, continuously adapting models to new data, and deploying these refined models in production on target platforms.
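To illustrate the kind of pipeline work this role involves, here is a minimal sketch of a parameter-efficient fine-tuning step in PyTorch. It uses a tiny stand-in backbone and synthetic data rather than a real pre-trained checkpoint (both are assumptions for illustration; a production pipeline would load a checkpoint, e.g. via the Hugging Face libraries):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pre-trained backbone (hypothetical; a real pipeline
# would load pre-trained weights rather than random ones).
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 2)  # new task-specific head to fine-tune

# Freeze the backbone so only the head's parameters are updated.
for p in backbone.parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for a curated domain dataset.
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,))

with torch.no_grad():
    initial_loss = loss_fn(head(backbone(x)), y).item()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()

final_loss = loss.item()
```

Freezing the backbone and training only a small head is one of the simplest fine-tuning strategies; the more advanced techniques mentioned above (e.g. adapter- or low-rank-based methods) follow the same pattern of restricting which parameters receive gradient updates.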
10. Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance.
Job ID CaTtJMaEJpXH