Senior Data Engineer

Remote

Job Description

Primary Title: Data Engineer

About The Opportunity

A remote, US-focused role within the cloud infrastructure and enterprise networking analytics sector. The team builds scalable data platforms that ingest, process, and analyze large volumes of telemetry and operational data to power observability, security, and performance use cases for enterprise customers.

We are hiring a hands-on Data Engineer who will design and operate data pipelines, integrate network telemetry sources, and build reliable analytics backends that support real-time and batch workloads.

Role & Responsibilities

  • Design, implement, and maintain scalable ETL/streaming pipelines to ingest network telemetry and application logs (batch and real-time).
  • Build and optimize data models, tables, and queries for analytics and ML feature stores to support observability and security use cases.
  • Integrate message brokers and streaming systems (Kafka) and ensure high-throughput, low-latency processing with Spark or similar engines.
  • Collaborate with network engineers and SREs to onboard telemetry sources (NetFlow/sFlow, SNMP, syslog) and ensure data quality and schema evolution.
  • Implement CI/CD for data pipelines, automate deployments, and maintain monitoring, alerting, and cost controls in cloud environments.
  • Troubleshoot production incidents, perform performance tuning, and document runbooks and data contracts.

Skills & Qualifications

Must-Have

  • Python
  • SQL
  • Apache Kafka
  • Apache Spark
  • AWS
  • Linux

Preferred

  • NetFlow
  • SNMP
  • Terraform

Additional Qualifications

  • 5+ years of professional experience building data pipelines, analytics platforms, or network telemetry solutions.
  • Proven experience operating production streaming and batch systems in a cloud environment (AWS preferred).
  • Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.

Benefits & Culture Highlights

  • Fully remote, US-based role with flexible hours and asynchronous collaboration.
  • Competitive compensation, health benefits, and annual professional development budget.
  • High-autonomy engineering culture focused on ownership, learning, and measurable impact.

Join a fast-moving team that turns network and operational data into actionable insights and reliable services. This role is ideal for engineers who enjoy building robust data infrastructure and working cross-functionally with networking and SRE teams.
