Senior Data Engineer (Cloud Environment)

Hays Poland
Remote
AI Summary

Design, build, and maintain scalable data pipelines and analytics solutions in a cloud environment. Work with large-scale data systems using Azure Data Factory, Databricks, and enterprise databases. Collaborate with DevOps teams to implement CI/CD pipelines.

Key Highlights
Design and maintain scalable data pipelines and analytics solutions
Work with large-scale data systems using Azure Data Factory and Databricks
Collaborate with DevOps teams to implement CI/CD pipelines
Technical Skills Required
Azure Data Factory, Databricks, Kubernetes, SQL Server, Oracle Database, Python, SQL, Java, Scala
Benefits & Perks
Up to 175 PLN/hour
Fully remote work
6-month contract with possible extension

Job Description


Hays IT Contracting is a cooperation based on a B2B contract. We connect IT specialists with the most interesting technology projects on the market.


Join the group of 500 satisfied contractors working for Hays’ clients!


For our client, we are currently looking for candidates for the position of:

Senior Data Engineer


Location: Fully remote

Rate: Up to 175 PLN/hour

Start Date: Open to candidates with up to a 1-month notice period

Contract Duration: 6 months (with possible extension)

Working Hours: Standard business hours

Language Requirement: English – C1


Role Overview

We are looking for an experienced Senior Data Engineer to design, build, and maintain scalable data pipelines and analytics solutions in a cloud environment. The role involves working with large-scale data systems using Azure Data Factory, Databricks, Kubernetes, and enterprise databases such as SQL Server and Oracle.


Key Responsibilities

  • Develop and maintain efficient, scalable data pipelines using Azure Data Factory and Databricks.
  • Design integration solutions based on microservices architecture deployed on Kubernetes.
  • Automate data ingestion, transformation, and processing workflows.
  • Manage and optimize Microsoft SQL Server and Oracle Database structures, including query tuning and stored procedures.
  • Collaborate with DevOps teams to implement CI/CD pipelines for data solutions.
  • Monitor and troubleshoot data workflows to ensure reliability and performance.
  • Document system architecture and processes, and provide regular progress reports.


Technical Requirements

  • 5+ years of experience with Azure Data Factory and Databricks.
  • Strong background in microservices architecture and Kubernetes deployments.
  • Advanced knowledge of SQL, Microsoft SQL Server, and Oracle Database.
  • Hands-on experience with cloud platforms (preferably Azure) and CI/CD practices.
  • Proficiency in Python and SQL (Java or Scala is a plus).


Qualifications & Competencies

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • Minimum 8 years of overall experience in data engineering.
  • Relevant certifications in Azure, Databricks, or Kubernetes are highly desirable.
  • Strong analytical skills, excellent communication, and ability to work independently.


Preferred Skills

  • Experience with Big Data technologies (Apache Spark, Hadoop, Kafka).
  • Knowledge of data governance, security, and compliance frameworks.
  • Familiarity with Agile methodologies and tools (Jira, Confluence, Azure DevOps).


Hays Poland sp. z o.o. is an employment agency registered in a registry kept by Marshal of the Mazowieckie Voivodeship under the number 361.

