Senior Data Engineer Job Summary
We are seeking an experienced Senior Data Engineer to join our team. This role will be responsible for maintaining and developing data pipelines using Python, SQL, DBT, and other data technologies.
The ideal candidate will have a strong background in data engineering and experience with big data environments like Hive, Spark, and Presto. They will also have hands-on experience with workflow management solutions like Airflow and data transformation tools like DBT.
Job Responsibilities
* Maintain and develop data pipelines required for the extraction, transformation, cleaning, pre-processing, aggregation, and loading of data from a wide variety of data sources.
* Design, implement, test, and maintain data pipelines/new features based on stakeholders' requirements.
* Develop and maintain scalable, available, quality-assured analytical building blocks/datasets by close coordination with data analysts.
* Optimize and maintain workflows and scripts in existing data warehouses and ETL processes.
* Design, develop, and maintain components of data processing frameworks.
* Build and maintain data quality and durability tracking mechanisms to provide visibility into and address inevitable changes in data ingestion, processing, and storage.
* Collaborate with stakeholders to define data requirements and objectives.
* Translate technical designs into business-appropriate representations, and analyze business needs and requirements to ensure that data services directly support the strategy and growth of the business.
* Address questions from downstream data consumers through appropriate channels.
* Create data tools that help analytics and BI teams build and optimize our product into an innovative industry leader.
* Stay up to date with data engineering best practices and patterns, and evaluate new technologies, capabilities, and open-source software in the context of our data strategy to ensure we adapt our core technologies and stay ahead of the industry.
* Contribute to the analytics engineering process.
Required Skills and Qualifications
* 5+ years of relevant work experience
* Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
* Experience building processes supporting data transformation, data structures, metadata, dependency management, data quality, and workload management
* Experience with Snowflake, including hands-on experience with Snowflake utilities such as SnowSQL and Snowpipe
* Hands-on experience with Snowflake cost optimization
* Solid programming skills and the ability to write modular, maintainable code, preferably in Python and SQL
* Experience with workflow management solutions like Airflow
* Experience with data transformation tools like DBT
* Experience working with Git
* Experience working with big data environments such as Hive, Spark, and Presto
Benefits
We offer a competitive salary and benefits package, including health insurance, 401(k) matching, and paid time off.
Others
We are an equal opportunity employer and welcome applications from diverse candidates.