Senior Data Engineer (Azure, Databricks, Airflow)

Capitole • European Union

Job Description


🔹 Senior Data Engineer (Azure, Databricks, Airflow) 🔹

🌍 We welcome international candidates based in Europe who are open to relocating to Barcelona.

📌 About the role


We are looking for a Senior Data Engineer to join a modern, cloud-based data platform team within a leading international tech company.

In this role, you will design, build, and evolve scalable data pipelines and data products within a Lakehouse architecture on Azure, leveraging tools such as Databricks, Data Factory, and Airflow.

You will work at the intersection of data engineering and platform evolution, contributing not only to building data products but also to improving the underlying platform, including metadata-driven frameworks, data quality, observability, and governance.

This is a highly hands-on role with strong ownership, where you will collaborate with cross-functional teams and play a key part in shaping how data is consumed across the business.

If you enjoy working in modern data environments, solving complex data challenges, and building reliable and scalable solutions, this could be a great fit.



💻 What you'll do


🔹 Design and build scalable, reliable, and reusable data pipelines using Azure Data Factory and Apache Airflow

🔹 Develop data transformations in Azure Databricks (PySpark / SQL) following Medallion Architecture (Bronze, Silver, Gold layers)

🔹 Optimize performance, cost, and reliability of data workloads

🔹 Contribute to the evolution of the data platform (metadata-driven orchestration, observability, data quality)

🔹 Support the migration of legacy data solutions to modern Lakehouse architecture

🔹 Implement and improve data quality frameworks (e.g. Soda, Great Expectations)

🔹 Ensure pipelines are observable, testable, and production-ready

🔹 Collaborate with Run/Operations teams to troubleshoot incidents and ensure platform stability

🔹 Participate in incident management, root cause analysis, and continuous improvement initiatives

🔹 Contribute to data cataloging, lineage, and governance (e.g. Unity Catalog)



💡 Must Have


🔹 4+ years of experience in Data Engineering / Analytics Engineering / BI

🔹 Strong expertise in: Python (PySpark); SQL (advanced level)

🔹 Hands-on experience with: Azure (Data Factory, cloud data platforms), Databricks & Delta Lake, Apache Airflow

🔹 Solid understanding of: Data Lake / Data Warehouse / Lakehouse architectures, Medallion architecture, ETL / ELT design patterns, metadata-driven approaches

🔹 Experience with performance optimization, partitioning, and scalable data pipelines

🔹 Understanding of batch and streaming pipelines

🔹 Familiarity with DevOps / DataOps practices

🔹 Strong problem-solving skills and ownership mindset

🔹 Ability to work with both technical and business stakeholders

🔹 Fluent English



✨ Nice to Have


🔹 Experience with Infrastructure as Code (Terraform, ARM, Bicep)

🔹 Experience with CI/CD pipelines (Azure DevOps, GitHub)

🔹 Exposure to Data Quality tools (Great Expectations, Soda)

🔹 Experience with Data Catalogs (Unity Catalog, DataHub, Atlan, etc.)

🔹 Experience with monitoring/logging tools (Grafana, Azure Log Analytics)

🔹 Background in AdTech or digital environments

🔹 Experience with BI tools (Power BI, Tableau)



๐Ÿ“ This role is based in Barcelona. Candidates must be willing to relocate.



๐ŸŒ Why join this project?


๐Ÿค People first โ€“ diverse and inclusive culture in an international environment.

๐Ÿš€ Modern cloud platforms and large-scale, global projects.

๐Ÿ˜ High team stability and collaborative culture.

๐ŸŽ“ โ‚ฌ1200 per year training budget and continuous learning opportunities.

๐Ÿ’ฐ Flexible compensation model.

๐Ÿฉบ Private health insurance and benefits package.

โšก Flexible working hours and hybrid model.

๐Ÿ‹๏ธ Wellhub: fitness, wellness, and mental health support.

โšฝ Football and paddle tennis teams sponsored by Capitole.

๐Ÿฅณ Team buildings, global events, and strong tech communities.



✨ Want to know more about us? Click here and discover all the details.

🔍 Curious about our culture? Check out what people are saying about us on Glassdoor.

💬 We know that not every candidate will meet 100% of the requirements. If your profile doesn't match perfectly but you believe you can add value, we'd still love to hear from you!

👉 Ready for the challenge? Apply now and be part of a global team driving cloud innovation and security.


Empowering People, Unlocking Innovation.



Information Security Notice

  • The employee will have access to confidential information related to Capitole and the assigned project.
  • Compliance with internal security and information protection policies is mandatory.
  • NDA signature required.
