AI Research Engineer
At the forefront of digital finance, we're pioneering a global revolution. Our solutions empower businesses to seamlessly integrate reserve-backed tokens across blockchains. We enable users to store, send, and receive digital tokens instantly and securely, anywhere in the world, at a fraction of the cost of traditional systems, with transparency at our core.
Innovate with us!
Our technologies advance AI and peer-to-peer innovation through solutions like KEET. Our expertise spans digital tokenization services, eco-friendly Bitcoin mining, and accessible digital learning for individuals in the gig economy. If you excel in English communication and want to contribute to a groundbreaking platform, this is the place for you.
Are you ready to be part of the future?
As part of our AI model team, you will develop architectures for models of various scales, improving their intelligence, efficiency, and capabilities. You should have expertise in LLM architectures and pre-training optimization, along with a research-driven approach to identifying and resolving pre-training bottlenecks.
Responsibilities:
- Pre-train AI models on large distributed clusters of NVIDIA GPUs.
- Design and prototype scalable architectures.
- Run experiments, analyze results, and optimize training methodologies.
- Improve model efficiency and computational performance.
- Advance training systems for scalability and efficiency.
Minimum requirements:
- Degree in Computer Science or a related field; PhD in NLP, Machine Learning, or a related area preferred, with a strong research record.
- Experience with large-scale LLM training on distributed GPU servers.
- Familiarity with distributed training frameworks and tooling (e.g., DeepSpeed, Megatron-LM, or PyTorch FSDP).
- Deep knowledge of transformer and non-transformer models.
- Expertise in PyTorch and Hugging Face libraries for model development and deployment.
Become a driving force behind AI innovation!