Senior Data Engineer - Greenfield Data Platform

solvace
United Kingdom (Remote)
AI Summary

Design and build a greenfield data platform from scratch, focusing on data engineering, architecture, and tooling. Develop a strong data foundation to support AI capabilities and event streaming architecture. Collaborate with the platform engineering team to integrate with existing systems.

Key Highlights
Build a greenfield data platform from scratch
Design and develop a strong data foundation for AI capabilities
Collaborate with the platform engineering team to integrate with existing systems
Key Responsibilities
Design and build a greenfield data platform from scratch
Develop a strong data foundation to support AI capabilities
Collaborate with the platform engineering team to integrate with existing systems
Design and implement a Kafka-based event backbone
Build and maintain pipelines into a medallion architecture (bronze/silver/gold layers)
Technical Skills Required
Python
SQL (SQL Server + PostgreSQL)
Databricks / Apache Spark
Apache Kafka
ETL/ELT pipeline design
Delta Lake
AWS data services (RDS, S3, Lambda, Glue)
Linux / CLI proficiency
Benefits & Perks
Fully remote work
Direct AI impact
Event streaming architecture
Manufacturing AI opportunity
Nice to Have
Apache Flink
Apache Iceberg
Go or Rust
Multi-tenant data architectures
Terraform / IaC
dbt or similar transformation framework
Manufacturing / industrial data domain experience
CDC (Change Data Capture)
Weaviate or other vector database experience

Job Description


The Opportunity (Fully Remote, Permanent Role)


This is a greenfield data platform build. Solvace is in the middle of a major AI transformation — a multi‐agent copilot platform (KAI) is already live in production, serving manufacturing clients globally. The AI roadmap is ambitious (agentic AI capabilities, multi‐agent orchestration, voice interfaces), and the next wave of capabilities depends on a strong data engineering foundation. A Databricks lakehouse POC is being evaluated (Unity Catalog, Delta Sharing, PrivateLink to RDS), and the team is assessing the right long‐term analytics platform strategy. This role builds the entire data engineering function from scratch — defining the architecture, tooling, and patterns that power the next generation of AI capabilities.


This is an entirely forward‐looking role. While there is an active platform modernisation underway (the engineering team is migrating a .NET platform from SQL Server to PostgreSQL and containerising for Kubernetes), this role’s primary focus is building new data pipelines and analytics infrastructure, not maintaining legacy systems. That said, collaboration with the platform engineering team is important, as the data pipelines need to ingest from both the existing SQL Server estate and the new PostgreSQL databases.


Core Technical Requirements

• Python — primary language for data pipelines, scripting, and integration with the AI stack. This is a Python‐first team

• SQL (SQL Server + PostgreSQL) — deep experience required across both engines. The platform runs a large‐scale multi‐tenant database estate with per‐client schemas (local, global, corporate). The new platform targets Aurora PostgreSQL, with several modules already migrated. Pipelines must bridge both.

• Databricks / Apache Spark — a lakehouse POC is being evaluated with Unity Catalog and Delta Sharing via AWS PrivateLink. Experience with Databricks or equivalent analytics platforms is important for assessing and scaling the right approach

• Apache Kafka — event streaming experience is essential. The platform is evaluating Kafka as the strategic backbone for ordered event logs, consumer group replay, and guaranteed partition ordering as the microservices fleet scales (a minimal producer sketch appears after this list)

• ETL/ELT pipeline design — building and maintaining pipelines into a medallion architecture (bronze/silver/gold layers) on Databricks or an equivalent analytics platform (see the pipeline sketch after this list)

• Delta Lake — understanding of lakehouse table formats, time travel, and ACID transaction patterns

• AWS data services — RDS, S3, Lambda, Glue or equivalent. The platform runs entirely on AWS

• Linux / CLI proficiency — must be comfortable working in a Linux / command-line environment
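
To make the Kafka requirement concrete, here is a minimal, illustrative producer sketch using the open-source kafka-python client. The broker address, topic name, key, and payload are hypothetical; keying events by site or tenant is one common way to achieve the per-key partition ordering described above.

```python
import json

from kafka import KafkaProducer  # kafka-python client (illustrative choice)

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # hypothetical broker
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Keying by site keeps all of a site's events in one partition,
# preserving per-site ordering for downstream consumer groups.
producer.send(
    "quality-inspections",                                   # hypothetical topic
    key="site-munich",
    value={"inspection_id": 123, "result": "pass"},
)
producer.flush()
```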
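
Likewise, a minimal sketch of the bronze-to-silver step of a medallion pipeline on Spark with Delta Lake, assuming hypothetical S3 paths and an inspections dataset; the real layers, schemas, and platform choice (Databricks or equivalent) are still to be defined.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # Delta Lake assumed available (built into Databricks)

# Bronze: land raw events as-is (paths and column names are hypothetical)
raw = spark.read.json("s3://example-lake/landing/inspections/")
raw.write.format("delta").mode("append").save("s3://example-lake/bronze/inspections")

# Silver: clean and conform (deduplicate, standardise types, drop bad rows)
bronze = spark.read.format("delta").load("s3://example-lake/bronze/inspections")
silver = (
    bronze.dropDuplicates(["inspection_id"])
          .withColumn("inspected_at", F.to_timestamp("inspected_at"))
          .filter(F.col("inspection_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("s3://example-lake/silver/inspections")
```

Gold-layer aggregates would follow the same pattern on top of the silver tables.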



AI-Assisted Development Methodology


Solvace is transitioning towards AI‐assisted development as a core engineering practice. Candidates should demonstrate:

• Hands‐on experience with AI coding tools — Claude Code, OpenAI Codex, GitHub Copilot, Cursor, or similar. We’re looking for engineers who have integrated these tools into their professional workflow, not just experimented casually

• Spec‐driven development — ability to write clear technical specifications that can be used to drive both human and AI‐assisted implementation, with strong evaluation criteria and test coverage

• Portfolio evidence — professional projects or side projects that demonstrate AI‐assisted development practices. Contributions to or experimentation with emerging projects like OpenClaw are a strong signal

• Testing and evaluation rigour — experience building robust test suites, automated quality gates, and evaluation frameworks that ensure AI-assisted code meets production standards (a small illustrative gate follows below)
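
As a rough illustration of the kind of automated quality gate referred to above (not the team's actual framework), a pandas/pytest check over a hypothetical silver-layer table might look like this:

```python
import pandas as pd


def validate_silver(df: pd.DataFrame) -> list:
    """Return a list of data-quality failures; an empty list means the gate passes."""
    failures = []
    if df["inspection_id"].isnull().any():
        failures.append("null inspection_id")
    if df["inspection_id"].duplicated().any():
        failures.append("duplicate inspection_id")
    if not df["result"].isin(["pass", "fail"]).all():
        failures.append("unexpected result value")
    return failures


def test_silver_quality_gate():
    # Hypothetical happy-path fixture; in CI this would run against real pipeline output.
    df = pd.DataFrame({"inspection_id": [1, 2, 3], "result": ["pass", "fail", "pass"]})
    assert validate_silver(df) == []
```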


Nice-to-Have

• Apache Flink — for real‐time stream processing as the platform moves from batch to streaming analytics

• Apache Iceberg — experience with open table formats alongside or as an alternative to Delta Lake

• Go or Rust — valued as evidence of strong backend engineering and systems thinking, even though Python is the primary language

• Multi-tenant data architectures — the platform runs per-client schemas with tenant isolation requirements. Understanding tenant isolation patterns is a significant advantage (see the schema-routing sketch after this list)

• Terraform / IaC — infrastructure is managed via Terraform; the Databricks POC has been codified in Terraform

• dbt or similar transformation framework

• Manufacturing / industrial data domain experience (sensor data, quality metrics, OEE, SPC)

• CDC (Change Data Capture) — the platform is building CDC pipelines for the SQL Server to PostgreSQL transition (an illustrative incremental-pull sketch follows this list)

• Weaviate or other vector database experience — currently used for semantic search in the AI agent layer
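
For the multi-tenant point above, a minimal SQLAlchemy sketch of extracting the same table from several per-client schemas; the schema names, connection string, and table are hypothetical.

```python
from sqlalchemy import create_engine, text

# Hypothetical per-client schemas (the real estate uses local/global/corporate schemas per tenant)
TENANT_SCHEMAS = ["client_a_local", "client_b_local"]

engine = create_engine("postgresql+psycopg2://user:pass@db-host:5432/platform")  # hypothetical


def extract_tenant(schema: str):
    """Read one tenant's rows by scoping the connection to that tenant's schema."""
    with engine.connect() as conn:
        conn.execute(text(f'SET search_path TO "{schema}"'))
        return conn.execute(text("SELECT * FROM quality_inspections LIMIT 100")).fetchall()


for schema in TENANT_SCHEMAS:
    rows = extract_tenant(schema)
```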
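
And for the CDC point, an intentionally simplified, watermark-based incremental pull between the two engines. Real log-based CDC would more likely use a tool such as Debezium or AWS DMS; the connection strings, table, and modified_at column here are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical connection strings for the legacy and target databases
source = create_engine("mssql+pyodbc://user:pass@legacy-host/app?driver=ODBC+Driver+18+for+SQL+Server")
target = create_engine("postgresql+psycopg2://user:pass@aurora-host:5432/platform")


def sync_increment(table: str, watermark: str) -> str:
    """Copy rows modified since the last watermark (assumes a modified_at column)."""
    changed = pd.read_sql(
        text(f"SELECT * FROM {table} WHERE modified_at > :wm"),
        source,
        params={"wm": watermark},
    )
    if not changed.empty:
        changed.to_sql(table, target, if_exists="append", index=False)
        return str(changed["modified_at"].max())
    return watermark
```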


Why Join?

• Greenfield data platform — the entire data engineering practice needs to be built from scratch. A lakehouse POC is being evaluated, but the production architecture is yours to define — patterns, tooling, and platform decisions from day one

• Direct AI impact — every pipeline you build directly unblocks AI agent capabilities. The KAI copilot is live in production and evolving towards agentic AI — autonomous agents that take actions, orchestrate workflows, and operate across systems. This role provides the data foundation that makes agentic capabilities possible

• Event streaming architecture — opportunity to design and implement a Kafka‐based event backbone, shaping the platform’s real‐time data architecture

• Manufacturing AI — opportunity to work at the intersection of industrial operations and generative AI, a domain with massive untapped potential. Rich, real‐world data: quality inspections, KPIs, maintenance records, safety observations, process parameters across global manufacturing sites

• Small team, high autonomy — the Innovation Hub operates with startup‐level autonomy inside a funded, growing company

• Leadership trajectory — the data function is critical to the platform and will grow. As an early hire, you have a clear path to grow with the function; the trajectory is earned, not guaranteed

