Senior Data Engineer

oryxsearch.io Ukraine
Remote
AI Summary

Design and maintain large-scale data pipelines, warehouses, and models for AI-powered analytics systems. Work with modern streaming tools, cloud data warehouses, and data governance tooling. Contribute to on-call rotation and document systems for continuous improvement.

Key Highlights
Build and maintain real-time and batch data pipelines
Design ELT workflows and develop scalable dimensional data models
Optimise data warehouses for cost and performance
Implement data quality checks, lineage, cataloguing, and PII management
Prepare AI-ready datasets, feature stores, and reproducible ML data workflows
Monitor pipeline performance and troubleshoot issues
Technical Skills Required
Python, SQL, Kafka, Pub/Sub, Apache Beam, dbt, Dataform, BigQuery, Snowflake, Redshift, Airflow, Dagster, Fivetran, Airbyte
Benefits & Perks
Flexible, remote-first environment
Competitive compensation
Strong learning, growth, and technical ownership opportunities

Job Description

Senior Data Engineer – Fully Remote

About the Role

We’re building large-scale, AI-powered analytics systems, and we need a Data Engineer to help architect and maintain the data pipelines, warehouse, and models that power our platform. You’ll work across streaming, transformation, and warehouse optimisation to ensure our data is clean, reliable, and always AI-ready.

What You’ll Do

  • Build and maintain real-time and batch data pipelines using modern streaming tools (Kafka, Pub/Sub, Beam, etc.)
  • Design ELT workflows using dbt/Dataform and develop scalable dimensional data models
  • Optimise data warehouses (BigQuery/Snowflake/Redshift) for cost and performance
  • Implement data quality checks, lineage, cataloguing, and PII management
  • Prepare AI-ready datasets, feature stores, and reproducible ML data workflows
  • Monitor pipeline performance, troubleshoot issues, and contribute to on-call rotation
  • Document systems and support continuous improvement across the engineering team

What You Bring

  • 5+ years of experience in production data engineering
  • Expert-level SQL and strong Python
  • Hands-on experience with dbt/Dataform, cloud data warehouses, and streaming technologies
  • Familiarity with Airflow/Dagster, Fivetran/Airbyte, and modern data governance tooling
  • Strong understanding of scalable data architecture, data quality, and cost optimisation

What Success Looks Like

  • Highly reliable pipelines (>99.9% uptime)
  • Fresh data (streaming latency under 5 minutes; daily batches delivered by the morning SLA)
  • Measurable reductions in warehouse costs and data quality errors
  • Documented, discoverable, and production-grade data models
  • AI/ML features shipped and used in production systems

Why Join

  • Own end-to-end data systems that power AI-driven products
  • Work with a modern cloud-native stack
  • Strong learning, growth, and technical ownership opportunities
  • Flexible, remote-first environment with competitive compensation

