Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, at a fraction of the cost, with transparency at our core.
Innovate with Tether
Tether Finance: Home to the trusted stablecoin USDT and digital asset tokenization services.
Tether Power: Optimizing excess power for eco-friendly Bitcoin mining.
Tether Data: Advancing AI and peer-to-peer tech with solutions like KEET.
Tether Education: Providing accessible digital learning for individuals in the gig and digital economies.
Tether Evolution: Pushing technological and human potential boundaries for a future of seamless innovation.
Why Join Us?
Work remotely with a global team passionate about fintech innovation. Collaborate, push boundaries, and set new industry standards. If you excel in English communication and want to contribute to a groundbreaking platform, Tether is the place for you.
Are you ready to be part of the future?
About the job:
As part of the AI model team, you will develop architectures for models of various scales, improving their intelligence, efficiency, and capabilities.
You should have expertise in LLM architectures and pre-training optimization, along with a research-driven approach to identifying and resolving pre-training bottlenecks.
Responsibilities:
* Pre-train AI models on large distributed clusters of NVIDIA GPUs.
* Design and prototype scalable architectures.
* Experiment, analyze, and optimize methodologies.
* Improve model efficiency and computational performance.
* Advance training systems for scalability and efficiency.
Minimum requirements:
* Degree in Computer Science or a related field; PhD in NLP, Machine Learning, or a related area preferred, with a strong research record.
* Experience with large-scale LLM training on distributed GPU servers.
* Familiarity with distributed training frameworks and tools.
* Deep knowledge of transformer and non-transformer models.
* Expertise in PyTorch and Hugging Face libraries for model development and deployment.