Job Description
Overview
Our client is seeking a Data Engineer to design, build, and maintain robust data pipelines and architectures that support analytics, machine learning, and business intelligence initiatives. The ideal candidate will be responsible for ensuring data reliability, scalability, and accessibility while collaborating with data scientists, analysts, and engineering teams. This role requires strong technical expertise, problem-solving skills, and experience with modern data tools in cloud-based environments.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines to ingest, transform, and store structured and unstructured data.
- Collaborate with Data Scientists, Analysts, and Product teams to understand data requirements and deliver solutions that support analytics and ML initiatives.
- Ensure data quality, integrity, and consistency across multiple sources and platforms.
- Implement data models, schemas, and storage solutions optimized for performance, scalability, and cost efficiency.
- Monitor, troubleshoot, and optimize data pipelines and workflows for reliability and performance.
- Participate in data governance, documentation, and compliance initiatives to maintain secure and auditable data practices.
- Evaluate and implement modern data engineering tools, frameworks, and best practices to improve existing systems.
- Collaborate with cross-functional teams to support data-driven decision-making and reporting.
Operating Context
- Works within a cloud-based, technology-driven environment with high volumes of data.
- Interacts daily with Data Science, Analytics, Product, and Engineering teams to deliver reliable data solutions.
- Operates in a fast-paced, remote-first environment requiring adaptability and problem-solving skills.
Required Skills
- Strong experience in designing, building, and maintaining data pipelines and ETL/ELT processes.
- Proficiency in SQL and relational database systems (e.g., PostgreSQL, MySQL, Redshift).
- Experience with programming languages such as Python, Java, or Scala for data processing.
- Familiarity with cloud platforms and data services (AWS, GCP, or Azure).
- Knowledge of data warehousing concepts, dimensional modeling, and big data technologies (e.g., Spark, Hadoop).
- Strong problem-solving and analytical skills with attention to data quality and reliability.
- Experience with workflow orchestration tools (e.g., Airflow, Prefect) is a plus.
Preferred Qualifications
- 3+ years of professional experience as a Data Engineer or in a similar role.
- Experience building data pipelines to support machine learning models or AI applications.
- Knowledge of data governance, security, and compliance best practices.
- Familiarity with real-time data streaming frameworks (e.g., Kafka, Kinesis).
- Prior experience in a fully remote or distributed team environment.
Success Metrics
- Reliable and scalable data pipelines delivered on time.
- High data quality, accuracy, and low error rates in ETL processes.
- Efficient performance of queries and data workflows supporting analytics and ML use cases.
- Positive feedback from cross-functional teams on data accessibility and usability.
By applying, you:
- Join our candidate network for current and future opportunities with our hiring partners.
- May receive feedback on your resume and job search approach.
- Will hear from us directly when we see a live opportunity that matches your background and preferences.