Location: Dublin, Ireland
Job Category: Internet
EU work permit required: Yes
Job Reference: wrcwcnao
Job Views: 1
Posted: 14.07.2025
Expiry Date: 28.08.2025
Job Description:
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions enable businesses to seamlessly integrate reserve-backed tokens across blockchains, harnessing blockchain technology to store, send, and receive digital tokens securely and instantly worldwide. Transparency and trust are fundamental to our approach.
Innovate with Tether
Tether Finance: Our product suite includes the trusted stablecoin USDT and digital asset tokenization services.
Tether Power: Focused on sustainable growth, optimizing excess power for eco-friendly Bitcoin mining.
Tether Data: Supporting AI and peer-to-peer tech with solutions like KEET for secure data sharing.
Tether Education: Providing accessible digital learning for individuals in the digital and gig economies.
Tether Evolution: Merging technology with human potential to push innovation boundaries.
Why Join Us?
Our global team works remotely and is passionate about fintech innovation. Join us to collaborate with top talent, set new industry standards, and contribute to a pioneering platform. Strong English communication skills are essential.
Are you ready to be part of the future?
About the job:
As part of our AI model team, you will design model serving and inference architectures for advanced AI systems, optimizing deployment and inference strategies for scalable, efficient performance across diverse applications and hardware environments.
Your responsibilities include designing, testing, and implementing inference pipelines, establishing performance metrics, and resolving bottlenecks to enable high-throughput, low-latency AI systems.
Responsibilities:
* Design and deploy high-performance model serving architectures optimized for various environments, including resource-constrained devices.
* Set and track performance metrics like latency, throughput, and memory usage.
* Build and monitor inference tests, analyze results, and optimize accordingly.
* Prepare datasets and simulation scenarios for real-world deployment challenges.
* Identify and resolve bottlenecks in serving pipelines, ensuring scalability and reliability.
* Collaborate with teams to integrate optimized frameworks into production, defining success metrics and continuously improving performance.
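The latency and throughput tracking described above can be sketched as a minimal benchmark harness. This is an illustrative example only, not part of the role description: `infer_fn` and `batch` are hypothetical placeholders for whatever serving framework and model call the team actually uses.

```python
import statistics
import time

def benchmark(infer_fn, batch, n_warmup=3, n_iters=20):
    """Measure per-call latency and throughput for an inference callable.

    `infer_fn` and `batch` are placeholders standing in for a real
    model-serving call; production code would target the team's
    serving framework directly.
    """
    for _ in range(n_warmup):          # warm caches / lazy init before timing
        infer_fn(batch)
    latencies = []
    for _ in range(n_iters):
        start = time.perf_counter()    # monotonic high-resolution clock
        infer_fn(batch)
        latencies.append(time.perf_counter() - start)
    p50 = statistics.median(latencies)
    p95 = sorted(latencies)[int(0.95 * len(latencies)) - 1]
    throughput = len(batch) / p50      # items per second at median latency
    return {"p50_s": p50, "p95_s": p95, "items_per_s": throughput}

if __name__ == "__main__":
    # Dummy stand-in for a model call, purely for illustration.
    print(benchmark(lambda xs: [x * 2 for x in xs], list(range(32))))
```

In practice, tail latency (p95/p99) matters as much as the median for low-latency serving, which is why the sketch reports both.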
Qualifications include a degree in Computer Science or a related field (a PhD in NLP, Machine Learning, or a similar area preferred), proven experience in inference optimization and kernel development for mobile devices, and expertise in model serving frameworks.