Senior Data Engineer (Cybersecurity Focus)
Join a fast-paced cybersecurity organization as a Senior Data Engineer to build large-scale data systems for threat detection, correlation, and automated remediation. 8–10+ years of data engineering experience required, along with strong Python and AWS skills.
🚀 We’re Hiring: Senior Data Engineer (Cybersecurity Focus)
📍 New York City / Plano, TX (3 days onsite, 2 days remote)
🔁 12-month contract (likely extensions)
📦 Open to relocation candidates
We’re looking for a Senior Data Engineer to join a fast-paced cybersecurity organization building large-scale data systems that power threat detection, correlation, and automated remediation — including GenAI-driven security tools used by both operations teams and leadership.
This is a data engineering–first role (~80% DE / ~20% ML exposure). Strong engineering fundamentals matter far more than tool familiarity.
Key Responsibilities
- Build and maintain large-scale, high-volume data pipelines
- Design robust ETL/ELT workflows using PySpark/Apache Spark and Databricks
- Develop and optimize data systems on AWS (S3, Glue, EMR, Lambda, etc.)
- Work with streaming & near real-time data (Kafka, APIs, micro-batching)
- Support cybersecurity analytics, log processing, and threat detection use cases
- Ensure pipelines are scalable, reliable, and production-grade
- Collaborate closely with data science, cybersecurity, and platform teams
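To give a concrete flavor of the pipeline work above, here is a minimal pandas sketch of a log-processing transform for a threat-detection use case. The log schema, field names, and failure threshold are all hypothetical (production versions of this would typically run as PySpark jobs over data in S3):

```python
import pandas as pd

# Hypothetical, simplified auth-log transform: normalize raw events and
# flag hosts with repeated authentication failures.
def transform_auth_logs(records):
    """Return per-host failure counts, flagging hosts at or above 3 failures."""
    df = pd.DataFrame(records)
    df["ts"] = pd.to_datetime(df["ts"], utc=True)  # normalize timestamps
    failures = df[df["status"] == "FAIL"]
    counts = failures.groupby("host").size().rename("fail_count").reset_index()
    counts["suspicious"] = counts["fail_count"] >= 3  # illustrative threshold
    return counts

records = [
    {"ts": "2024-01-01T00:00:00Z", "host": "web-1", "status": "FAIL"},
    {"ts": "2024-01-01T00:00:05Z", "host": "web-1", "status": "FAIL"},
    {"ts": "2024-01-01T00:00:09Z", "host": "web-1", "status": "FAIL"},
    {"ts": "2024-01-01T00:00:11Z", "host": "web-2", "status": "OK"},
]
flagged = transform_auth_logs(records)
```

The same filter/group/aggregate shape translates almost directly to PySpark DataFrame operations at scale.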
Technical Skills Required
- 8–10+ years of Data Engineering experience (10+ preferred)
- Strong Python for data engineering (pandas, DataFrames)
- Deep hands-on experience with PySpark / Apache Spark
- Proven work with Databricks
- Strong AWS experience (especially S3 + data services ecosystem)
- Experience designing end-to-end data pipelines at scale
- Solid understanding of data modeling & SCD (Slowly Changing Dimensions)
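Since SCD handling is called out above, here is a minimal SCD Type 2 sketch in pandas: when a tracked attribute changes, the current dimension row is expired and a new current version is appended. Column names (`is_current`, `start_date`, `end_date`) are illustrative, this handles only changed keys (not brand-new ones), and in Databricks this would usually be a Delta Lake `MERGE` instead:

```python
import pandas as pd

# Sketch of SCD Type 2 upkeep: expire changed rows, append new versions.
# Only keys already present in the dimension are handled here.
def scd2_apply(dim, updates, key, attr, effective_date):
    """Close out rows whose tracked attribute changed; append new current rows."""
    dim = dim.copy()
    current = dim[dim["is_current"]]
    merged = updates.merge(current, on=key, suffixes=("_new", ""))
    changed = merged[merged[f"{attr}_new"] != merged[attr]][key]
    # Expire the current version of every changed key
    mask = dim[key].isin(changed) & dim["is_current"]
    dim.loc[mask, "is_current"] = False
    dim.loc[mask, "end_date"] = effective_date
    # Append the new current versions
    new_rows = updates[updates[key].isin(changed)].copy()
    new_rows["start_date"] = effective_date
    new_rows["end_date"] = None
    new_rows["is_current"] = True
    return pd.concat([dim, new_rows], ignore_index=True)

dim = pd.DataFrame([{"host_id": 1, "owner": "alice",
                     "start_date": "2023-01-01", "end_date": None,
                     "is_current": True}])
updates = pd.DataFrame([{"host_id": 1, "owner": "bob"}])
out = scd2_apply(dim, updates, "host_id", "owner", "2024-01-01")
```

After the call, the dimension holds both the expired "alice" row (with an `end_date`) and the new current "bob" row, preserving full history.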
- Hands-on or strong exposure to Kafka (consumer-side understanding required)
- API-based integrations
- Experience with high-volume systems (not just light micro-batching)
- Ability to explain tradeoffs between batch vs micro-batch vs streaming architectures
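The batch vs micro-batch tradeoff above can be sketched in a few lines. This toy consumer loop uses an in-memory deque as a stand-in for a Kafka partition (no real client involved): larger batches amortize per-batch overhead for throughput, while the partial final flush shows the latency side of the tradeoff.

```python
from collections import deque

# Toy micro-batch consumer: drain events in fixed-size batches.
def consume_micro_batches(source, batch_size):
    """Group queued events into micro-batches; the tail batch may be short."""
    batches = []
    batch = []
    while source:
        batch.append(source.popleft())
        if len(batch) == batch_size:
            batches.append(batch)  # full batch: best per-event throughput
            batch = []
    if batch:
        batches.append(batch)  # partial flush: lower latency, smaller unit of work
    return batches

events = deque(range(7))
out = consume_micro_batches(events, 3)
# out → [[0, 1, 2], [3, 4, 5], [6]]
```

Pure streaming is the `batch_size == 1` limit of this loop; classic batch is the limit where the whole backlog is one batch.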
⭐ Nice to Have
- Java exposure (not mandatory)
- Cybersecurity domain experience
- Experience with high-throughput or real-time data platforms