Seeking a Lead Data Engineer to build and scale data pipelines and architectures in a fast-paced environment. Responsibilities include developing ETL/ELT processes, managing Snowflake data warehouses, and ensuring data quality. Requires strong SQL, Python/Scala/Java, Snowflake, and cloud platform experience.
Job Description
We’re looking for a Lead Data Engineer to join a dynamic team working at the intersection of data, analytics, and trend intelligence in a fast-moving environment. In this role, you’ll be responsible for building and scaling data pipelines, enabling analytics and data science teams, and contributing to the development of modern data products.

Key Responsibilities
- Design, build, and maintain scalable data pipelines and data architectures
- Develop and optimize ETL and ELT processes to integrate data from multiple sources
- Build and manage data warehouse solutions based on Snowflake
- Ensure data quality, integrity, and governance across the data platform
- Work closely with data scientists and analysts to support data-driven products and insights
- Improve data processing workflows and optimize pipeline performance
- Design and implement data services and APIs that enable efficient data access across systems
- Continuously assess and introduce tools that strengthen the data engineering ecosystem
- As a Lead, help define technical direction, standards, and best practices across the data engineering area
Technical Skills Required
- Experience working as a Data Engineer, Senior Data Engineer, or in a similar role
- Strong knowledge of SQL
- Hands-on experience with Snowflake or a similar modern data warehouse platform
- Experience building ETL and ELT pipelines
- Good command of Python or another programming language such as Scala or Java
- Experience with large-scale data processing frameworks such as Apache Spark or Apache Flink
- Experience working with cloud platforms, preferably AWS; Azure or GCP are also welcome
- Good understanding of data modelling and data warehousing concepts
- Ability to translate business needs into scalable technical solutions
- Readiness to support technical decisions and contribute to data engineering standards
Nice to Have
- Experience with Databricks or other distributed compute platforms
- Experience designing data APIs or microservices
- Knowledge of data governance and data quality frameworks
- Experience working in cross-functional data teams

Benefits & Perks
- 100% remote work
- Long-term, stable cooperation
- Private healthcare and Multisport
- Real impact on the technical roadmap and engineering standards
- Opportunity to work on challenging production systems operating at meaningful scale