Senior Data Engineer (Azure Data Lake Storage, Azure Databricks, Azure Synapse)

Remote

Job Description


SENIOR DATA ENGINEER


Our Ontario based client is seeking a Senior Data Engineer to join their team.


RESPONSIBILITIES:

  • Design, build, and implement batch and real-time data pipelines.
  • Develop and propose design patterns, ensuring the proposed design optimally addresses access patterns, query patterns, and data consumption, and adheres to internal architecture standards.
  • Work with various stakeholders across the business, data scientists, and IT.
  • Refine data requirements to meet various data and analytics initiatives and data consumption requirements.
  • Increase the overall speed at which data is onboarded to the Data and Analytics platform.
  • Build robust data pipelines to enable larger-scale data consumption.
  • Increase the overall quality of data pipeline development through DevSecOps practices.


TECHNICAL SKILLS:

  • Programming experience in Spark using modern languages such as Python and Scala
  • Experience working with modern data architectures such as Azure Data Lake Storage, Azure Databricks, Azure Synapse, and Delta Lake
  • Experience working with integration patterns and technologies such as Azure Event Hub, Function Apps, and C#
  • Knowledge of and expertise in database modeling techniques: Data Vault, Star, Snowflake, and 3NF
  • Experience working with streaming data architectures and real-time technologies: Spark Streaming, Kafka, Flink, Storm
  • Experience working with relational and non-relational database technologies: SQL Server, Oracle, Cassandra, MongoDB, Cosmos DB, HBase
  • Experience working with source code and configuration management environments such as Azure DevOps, Git, Maven, Nexus


*Candidates must hold Enhanced Reliability Security Clearance (MANDATORY)


1 year contract

100% Remote

Market rate, based on a 37.5-hour work week


If you’re interested, please reply with a copy of your updated CV.

I will be in touch shortly to discuss additional details and next steps.


I hope to hear from you soon.

