We are seeking a skilled Data Engineer to design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark). The ideal candidate combines strong Databricks and Apache Spark experience with proficiency in Python and hands-on experience of AWS or Azure cloud services.
Job Description
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-500 per day, and will initially run for 6 months. It is fully remote.
Key Responsibilities
• Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
• Build and optimize ETL/ELT workflows for large-scale structured and unstructured data
• Implement Delta Lake architectures (Bronze/Silver/Gold layers); a minimal illustrative sketch follows this list
• Integrate data from multiple sources (databases, APIs, event streams, files)
• Optimize Spark jobs for performance, scalability, and cost
• Manage data quality, validation, and monitoring
• Collaborate with analytics and ML teams to support reporting and model development
• Implement CI/CD, version control, and automated testing for data pipelines
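To give a flavour of the day-to-day work, the sketch below shows a minimal Bronze-to-Silver streaming step on Databricks using Auto Loader and Delta Lake. It is illustrative only: the SparkSession (spark) is assumed to be provided by the Databricks runtime, and the storage paths, table names, and event_id column are hypothetical, not details supplied by the client.

    # Minimal Bronze -> Silver Delta Lake sketch for a Databricks notebook.
    # Assumptions (not from the job spec): `spark` is the SparkSession the
    # Databricks runtime provides; paths, table names, and the `event_id`
    # column are hypothetical.
    from pyspark.sql import functions as F

    # Bronze: ingest raw JSON files incrementally with Auto Loader,
    # stamping each record with its ingestion time.
    bronze_stream = (
        spark.readStream
        .format("cloudFiles")                     # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "s3://example-bucket/schemas/events")
        .load("s3://example-bucket/raw/events/")  # hypothetical landing path
        .withColumn("_ingested_at", F.current_timestamp())
    )
    (
        bronze_stream.writeStream
        .option("checkpointLocation", "s3://example-bucket/checkpoints/bronze_events")
        .toTable("bronze.events")
    )

    # Silver: read the Bronze table as a stream, drop invalid and
    # duplicate records (state is unbounded here; a production job would
    # add a watermark), and write the cleaned result to a Silver table.
    silver_stream = (
        spark.readStream.table("bronze.events")
        .filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id"])
    )
    (
        silver_stream.writeStream
        .option("checkpointLocation", "s3://example-bucket/checkpoints/silver_events")
        .toTable("silver.events")
    )

A Gold layer would follow the same pattern, aggregating Silver tables into business-level views for the analytics and ML teams mentioned above.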
Required Qualifications
• 3+ years of experience as a Data Engineer
• Strong experience with Databricks and Apache Spark
• Proficiency in Python (required) and advanced SQL
• Hands-on experience with AWS or Azure cloud services:
  • AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  • Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
• Experience with Delta Lake, Parquet, and data modeling