Overview
Role Purpose:
Support the development and maintenance of software tools that underpin Data Science solutions and operations, using modern technologies (e.g. AWS, Docker, Kubernetes) to contribute to improved business performance.
Collaborate with business stakeholders and engineering teams to gather requirements and contribute to the design and implementation of solutions.
Assist in maintaining and enhancing existing tools/services developed by the Data Science team.
Build CI/CD pipelines and contribute to testing, logging, and monitoring processes under guidance.
Participate in code reviews and follow CarTrawler best practices for clean, maintainable code.
Continuously develop technical skills and support the wider team, working under the mentorship of senior engineers.
Reporting to: Head of MLOps
Responsibilities
Collaborating with team members and stakeholders to understand project requirements and identify areas where support is needed
Staying up to date with developments in Data Science and DevOps to help ensure our tech stack remains current
Contributing to the development of Data Science engineering solutions by:
Supporting data collection, cleaning, and preparation based on identified requirements
Assisting in the implementation of engineering solutions using appropriate tools and technologies
Following best practices for CI/CD, testing, logging, and monitoring under guidance
Writing scalable, production-ready code where appropriate, with support from senior colleagues
Participating in deployments and monitoring the impact of deployed solutions
Supporting ongoing development and maintenance of internal Data Science tools/platforms, such as:
MVT – Multi-variate testing platform
ACDC – Cloud-based ML deployment tool
Action Factory – Automated decision-making platform
Echo – MLOps pipeline management tool
Various internal Python libraries and utilities
Assisting with business-as-usual (BAU) tasks and ongoing support of tools and services provided by the Data Science team
Clearly communicating updates and progress to team members in a way that suits both technical and non-technical audiences
Participating in proof-of-concept projects, helping to explore innovative solutions and learning how to analyse and interpret data effectively
Taking ownership of personal development by seeking guidance and feedback from mentors and contributing positively to a collaborative team environment
Qualifications
Undergraduate degree in Computer Science, Engineering, Mathematics, or related technical field, or relevant internship/industry experience
Good understanding of software engineering and DevOps practices (e.g. Jenkins, GitHub Actions), including object-oriented programming, data structures, version control, REST APIs, and containerisation tools (e.g. Docker, Kubernetes)
Exposure to cloud platforms such as AWS or Azure
Experience developing GenAI applications, or exposure to the surrounding tech stack (e.g. vector databases, NLP, LLMs), is a plus
Proficient in writing clean, structured Python code; familiarity with common Python libraries used in data science and machine learning
Comfortable using version control systems (e.g. Git) and basic development workflows (commits, branches, pull requests)
Exposure to ML libraries (e.g. scikit-learn; bonus for familiarity with TensorFlow or PyTorch)
Willingness to learn test-driven development and build basic unit and integration tests with support
An interest in learning about production considerations like resource constraints and scalability
Basic SQL skills; exposure to other database types (graph, NoSQL) is a plus
Strong communication skills with a desire to learn how to present technical concepts clearly to varied audiences
A proactive and curious mindset, with a collaborative approach to learning and team engagement
Seniority level
Not Applicable
Employment type
Full-time
Job function
Engineering and Information Technology
Industries
Transportation, Logistics, Supply Chain and Storage