Senior Data Engineer (Azure Cloud)

Bayforce • United States
Remote
AI Summary

Design and implement cloud-native data integration and Lakehouse solutions on Azure. Lead end-to-end data engineering from ingestion to curated Lakehouse/warehouse layers. Drive engineering standards and reusable patterns.

Key Highlights
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling
Implement monitoring, logging, alerting, and operational runbooks for production pipelines
Technical Skills Required
Azure Data Factory, Synapse, Microsoft Fabric, PySpark, Spark, Azure Data Lake, OneLake, Lakehouse, Data Warehouse, SQL, API-based integrations, Power BI, Semantic Model, Microsoft Purview
Benefits & Perks
Contract employment
Remote work (ET or CT time zones)

Job Description


**No third-party vendor candidates or sponsorship**


Role Title: Senior Data Engineer

Client: Global construction and development company

Employment Type: Contract

Duration: 1 year

Preferred Location: Remote based in ET or CT time zones


Role Description:

The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering—from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers—while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.


Key Responsibilities

  • Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
  • Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads); a minimal sketch follows this list.
  • Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
  • Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
  • Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
  • Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
  • Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
  • Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
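
The performance-tuning and incremental-load practices called out above can be made concrete with a short PySpark sketch. This is a minimal, hypothetical example assuming Delta-format Lakehouse tables (the default in Fabric/OneLake); the table names (bronze.orders, silver.orders), the _loaded_at watermark column, and the order_id key are illustrative, not the client's actual schema.

```python
# Minimal PySpark sketch: incremental Bronze -> Silver load with a watermark
# and a MERGE-based upsert into a Delta table. Names are illustrative only.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("silver_orders_incremental").getOrCreate()

# Read only Bronze rows that arrived after the last successful Silver load.
last_watermark = spark.sql(
    "SELECT MAX(_loaded_at) AS wm FROM silver.orders"
).collect()[0]["wm"]

bronze = spark.table("bronze.orders")
if last_watermark is not None:
    bronze = bronze.where(F.col("_loaded_at") > F.lit(last_watermark))

# Light conformance for the Silver layer: de-duplicate and derive a date column.
silver_increment = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Upsert (MERGE) into the Silver Delta table instead of rewriting it,
# keeping the load incremental rather than a full refresh.
target = DeltaTable.forName(spark, "silver.orders")
(
    target.alias("t")
    .merge(silver_increment.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE-based upsert avoids full rewrites of the Silver table and pairs naturally with the Bronze/Silver/Gold layering described in these responsibilities.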


Requirements:

  • Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
  • 5+ years of experience in data engineering with strong focus on Azure Cloud.
  • Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
  • Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
  • Advanced PySpark/Spark experience for complex transformations and performance optimization.
  • Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency); see the sketch after this list.
  • Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
  • Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
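
The API-integration requirement above (auth, pagination, retries, rate limits) is essentially a resiliency pattern. The following is a minimal sketch of one common way to handle it in Python with the requests library; the endpoint URL, query parameters, and response shape are assumptions for illustration only.

```python
# Hypothetical sketch of a resilient paginated REST ingestion loop: bearer auth,
# retry with exponential backoff on throttling, and page-by-page pulls.
import time
import requests

BASE_URL = "https://api.example.com/v1/orders"   # placeholder endpoint
PAGE_SIZE = 500
MAX_RETRIES = 5


def fetch_page(session: requests.Session, page: int) -> dict:
    """Fetch one page, retrying on 429/5xx responses with exponential backoff."""
    for attempt in range(MAX_RETRIES):
        resp = session.get(
            BASE_URL, params={"page": page, "page_size": PAGE_SIZE}, timeout=30
        )
        if resp.status_code == 429 or resp.status_code >= 500:
            # Honor Retry-After when the API provides it, otherwise back off exponentially.
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Page {page} failed after {MAX_RETRIES} retries")


def ingest(token: str) -> list[dict]:
    """Walk all pages until the API returns an empty result set."""
    records: list[dict] = []
    with requests.Session() as session:
        session.headers.update({"Authorization": f"Bearer {token}"})
        page = 1
        while True:
            payload = fetch_page(session, page)
            items = payload.get("items", [])
            if not items:
                break
            records.extend(items)
            page += 1
    return records
```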


Preferred Skills

  • Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
  • Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
  • Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing); a small testing sketch follows this list.
  • Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
  • Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
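
On the testing side of CI/CD for data engineering, one common approach is to keep transformations as plain functions and unit-test them with pytest against a local Spark session, so the suite can run in an Azure DevOps or GitHub Actions build before code promotion. This is a hedged sketch; the transformation and column names are hypothetical.

```python
# Hypothetical sketch: unit-testing a PySpark transformation with pytest on a
# local Spark session, suitable for a CI job that gates code promotion.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_year(df):
    """Example transformation under test: derive order_year from order_date."""
    return df.withColumn("order_year", F.year("order_date"))


@pytest.fixture(scope="session")
def spark():
    # Single-threaded local session keeps the CI run lightweight.
    return (
        SparkSession.builder
        .master("local[1]")
        .appName("unit-tests")
        .getOrCreate()
    )


def test_add_order_year(spark):
    df = spark.createDataFrame([("A-1", "2024-03-15")], ["order_id", "order_date"])
    df = df.withColumn("order_date", F.to_date("order_date"))

    result = add_order_year(df).select("order_id", "order_year").collect()

    assert result[0]["order_year"] == 2024
```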

