Job Opportunity
Key Responsibilities:
* Develop, maintain, and optimize data pipelines for seamless data collection, cleaning, and transformation.
* Contribute to the design and implementation of internal data-science tools, ensuring their reliability and efficiency.
* Collaborate on developing CI/CD workflows, testing, logging, and monitoring to ensure high-quality production code.
* Write scalable Python code that meets business requirements and standards.
* Provide support for deployments and system performance monitoring.
* Work on proof-of-concept projects and explore innovative data-driven solutions.
Requirements:
* A minimum of 5 years of experience in software or data engineering.
* Strong grasp of software engineering and DevOps best practices (e.g., version control, APIs, containerization).
* Experience with cloud platforms such as Amazon Web Services (AWS).
* Proficiency in Python and popular machine learning/data science libraries.
* Familiarity with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch.
* Willingness to learn and apply unit and/or integration testing practices.
* Knowledge of, or interest in, generative AI (including large language models and vector databases) is a plus.