Data Infrastructure Engineer

tardis group • Australia
Remote
AI Summary

Design and operate hybrid data infrastructure, build high-performance ingestion pipelines, and implement data governance and observability.

Key Highlights
Design and operate hybrid data infrastructure across on-prem and cloud
Build high-performance ingestion pipelines for real-time and batch market data
Implement strong data governance, observability, and access patterns
Technical Skills Required
Distributed systems, object storage (S3), columnar formats, streaming tech (Kafka), Linux, networking, performance tuning, cloud (AWS or GCP), Python, Go, Java, SQL, data modelling, Kubernetes, Docker, IaC, Airflow, Dagster, Prometheus, Grafana, Delta Lake, Iceberg, Lakehouse architectures
Benefits & Perks
Fully remote potential
Permanent or contract options

Job Description


🚀 Data Infrastructure Engineer | Hybrid (Fully remote potential) | High-Impact Role


Permanent or Contract


About the Opportunity


We’re looking for a Data Infrastructure Engineer to help build and maintain our data platform. If you enjoy solving hard performance problems, designing scalable systems, and working with high-volume data, this role offers a rare chance to make a direct, measurable impact.


What You’ll Be Working On

  • Designing and operating hybrid data infrastructure across on-prem and cloud
  • Building high-performance ingestion pipelines for real-time and batch market data
  • Integrating data from internal systems and external vendors
  • Implementing strong data governance, observability, and access patterns
  • Automating deployments and infrastructure using modern DevOps & IaC tools
  • Supporting critical production systems, including occasional after-hours cover
  • Improving monitoring, alerting, and system reliability end-to-end


What We’re Looking For

  • Degree in Maths, Engineering, Computer Science, or equivalent experience
  • Strong background in distributed systems, object storage (e.g., S3), columnar formats, and streaming technologies (e.g., Kafka)
  • Linux, networking, and performance tuning expertise
  • Cloud experience (AWS or GCP)
  • Proficiency in Python, Go, or Java
  • Solid SQL + data modelling foundations
  • Kubernetes, Docker, and IaC experience
  • Familiarity with orchestration tools like Airflow or Dagster
  • Clear communicator with a passion for learning and problem-solving
  • Knowledge of time-series/tick data, schema evolution, and data versioning
  • Experience in small, agile engineering teams
  • Prometheus/Grafana or other observability tooling
  • Exposure to Delta Lake, Iceberg, or Lakehouse architectures

