Join Tether and Shape the Future of Digital Finance
At Tether, we’re pioneering a global financial revolution with innovative solutions that enable seamless integration of reserve-backed tokens across blockchains. Our offerings include:
* Tether Finance: Featuring the trusted stablecoin USDT and digital asset tokenization services.
* Tether Power: Eco-friendly energy solutions for Bitcoin mining.
* Tether Data: Cutting-edge data sharing and AI solutions like KEET.
* Tether Education: Digital learning initiatives for individuals in the digital and gig economies.
* Tether Evolution: Innovating at the intersection of technology and human potential.
Why Join Us?
Our remote, global team is passionate about fintech innovation. If you excel in English communication and want to contribute to a groundbreaking platform, Tether is the place for you.
About the job:
As part of our AI model team, you will develop evaluation frameworks and benchmark methodologies for pre-training, post-training, and inference of AI models. Your focus will be on designing metrics and assessment strategies to ensure models are responsive, efficient, and reliable across various applications and hardware environments.
The role requires expertise in advanced model architectures and evaluation practices, as well as hands-on experience building evaluation pipelines and performance dashboards. Collaboration with cross-functional teams to share findings and integrate feedback is essential. Your work will help set standards for AI model quality and reliability, delivering tangible value in real-world scenarios.
Responsibilities:
* Develop and deploy evaluation frameworks that assess models at every stage of the lifecycle, tracking key performance indicators such as accuracy, latency, throughput, and memory footprint.
* Create evaluation datasets and benchmarks to measure model robustness and improvements.
* Collaborate with product, engineering, and data teams to align evaluation metrics with business goals, presenting insights through dashboards and reports.
* Analyze evaluation data to identify bottlenecks, propose optimizations, and enhance model performance and scalability.
* Refine evaluation methodologies through experiments and research, staying updated on emerging techniques.
Minimum Requirements:
* A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or a similar discipline is preferred.
* Proven experience in designing and evaluating AI models across different lifecycle stages.
* Strong programming skills and experience with evaluation benchmarks, pipelines, and performance metrics.
* Ability to conduct iterative experiments and stay current with emerging trends.
* Experience working with cross-functional teams and translating technical insights into actionable recommendations.