Senior Data Engineer

Stott and May • United States
Remote

Job Description


Overview

We are looking for a Data Engineer to design and build scalable data pipelines while supporting the architecture and optimization of a modern cloud data warehouse. This role will focus on ingesting data from multiple sources, transforming it into reliable datasets, and ensuring it is efficiently structured for downstream analytics and reporting.


This is a 6-month, fully remote, rolling contract.

This role pays $70-$80 (W2) or $80-$90 (LLC).


Key Responsibilities

  • Design and develop scalable ETL/ELT pipelines to ingest data from APIs, SaaS platforms, and internal systems
  • Build and maintain data workflows for batch and near-real-time processing
  • Develop data transformation logic using SQL and Python within the warehouse environment
  • Optimize data warehouse performance, storage design, and query efficiency
  • Implement data quality checks and monitoring across ingestion and transformation layers
  • Collaborate with analytics teams to ensure datasets are reliable and properly structured for downstream use
  • Maintain documentation and best practices for pipeline development and warehouse modeling


Required Experience

  • Strong experience building data pipelines using Python and SQL
  • Hands-on experience with modern cloud data warehouses (Snowflake, BigQuery, or Redshift)
  • Experience with workflow orchestration tools such as Airflow or Prefect
  • Experience processing large datasets using Spark or distributed data frameworks
  • Strong understanding of data modeling and warehouse design principles
  • Experience integrating data from APIs, databases, and SaaS applications


Nice to Have

  • Experience working in high-volume data environments
  • Familiarity with dbt for transformation workflows
  • Exposure to machine learning data pipelines


Tech Stack Example

  • Python
  • SQL
  • Snowflake / BigQuery
  • Airflow
  • Spark / PySpark
  • Cloud (AWS or GCP)

