Senior Data Engineer (Remote)

valzo soft solutions • United States
Remote
AI Summary

Design and build scalable data pipelines and infrastructure, collaborating with cross-functional teams to deliver trusted data solutions. Develop and maintain data models, metadata, and documentation. Ensure data quality, security, and compliance best practices.

Key Highlights
Design, develop, and maintain ETL/ELT pipelines
Build and optimize data warehouses, data lakes, and distributed data systems
Collaborate with data analysts, data scientists, and engineering teams
Develop and manage data models, metadata, and documentation
Implement data quality, validation, and monitoring frameworks
Technical Skills Required
SQL, Python, Scala, AWS, Azure, GCP, Airflow, dbt, Snowflake, BigQuery, Redshift, Databricks, Spark, Kafka
Benefits & Perks
Remote work
Full-time employment

Job Description


Job Title: Senior Data Engineer (Remote)

Overview

We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate has strong experience with modern data technologies and cloud platforms, along with a deep understanding of data modeling and ETL/ELT best practices. This is a fully remote role.

Responsibilities

  • Design, develop, and maintain ETL/ELT pipelines for ingesting and transforming large datasets.
  • Build and optimize data warehouses, data lakes, and distributed data systems.
  • Collaborate with data analysts, data scientists, and engineering teams to deliver trusted data solutions.
  • Develop and manage data models, metadata, and documentation.
  • Implement data quality, validation, and monitoring frameworks.
  • Ensure security, governance, and compliance best practices are applied.
  • Optimize data performance, reliability, and scalability.

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 3-10+ years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL and experience with relational and NoSQL databases.
  • Hands-on experience with Python or Scala for data workflows.
  • Experience working with cloud platforms such as AWS, Azure, or GCP.
  • Familiarity with modern data stack tools (e.g., Airflow, dbt, Snowflake, BigQuery, Redshift, Databricks).
  • Experience with distributed data processing frameworks (e.g., Spark, Kafka).
  • Strong understanding of data modeling and warehouse/lakehouse architecture.

