Software Engineer I - Real-Time Data Streaming (Scala + Spark)

Brooksource • United States
Remote
AI Summary

Join the Real-Time Data Streaming team of a Brooksource client to build and maintain scalable data pipelines. Work with Scala and Spark Structured Streaming to ensure data accuracy and reliability. 1–2 years of professional experience required.

Key Highlights
Maintain and enhance Spark-based streaming applications
Support BAU tasks and troubleshoot Spark Structured Streaming jobs
Collaborate with stakeholders to customize data exports
Technical Skills Required
Scala, Spark Structured Streaming, AWS (S3, Athena, EC2, IAM, EMR), Apache Kafka, Git
Benefits & Perks
Competitive pay rate ($38–$43/hr)
Remote work option with potential conversion to FTE
Opportunity to work in a high-impact data engineering environment

Job Description


Software Engineer I – Real-Time Data Streaming (Scala + Spark)

Type: W2 long-term contract

Location: United States; Remote (potential conversion may require relocation to client hub)

Client Hubs: Denver, Charlotte, St. Louis, Stamford/NYC, Austin

Onsite Policy if Converted FTE: 4 days onsite / 1 day remote

Start Date: early January

Pay Rate: $38–$43/hr (DOE)


Overview

We are seeking a Software Engineer I to join our client’s Real-Time Data Streaming team. This group builds and maintains scalable data pipelines that ingest, transform, and deliver mission-critical data to internal and external stakeholders. You’ll work across real-time and batch processing systems, ensuring data accuracy, reliability, and integrity across billions of daily events.

This is an excellent role for an engineer with 1–2 years of professional experience, particularly hands-on experience with Scala and, ideally, Spark Structured Streaming, who wants to grow within a high-impact data engineering environment.


Key Responsibilities

• Maintain and enhance Spark-based streaming applications (a sketch of this kind of job follows this list).

• Support BAU tasks, including privacy compliance updates and Spark version upgrades.

• Collaborate with stakeholders to customize data exports for internal teams and external partners.

• Troubleshoot, debug, and optimize Spark Structured Streaming jobs for performance and reliability.
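
For context, the responsibilities above describe applications of roughly the shape sketched below: a Spark Structured Streaming job in Scala that reads events from Kafka and writes Parquet to S3. This is a generic, minimal illustration of the stack named in this posting, not the client's actual pipeline; the broker address, topic name, and bucket paths are placeholders.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object EventStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-stream-job") // hypothetical application name
      .getOrCreate()

    // Read raw events from a Kafka topic ("events" and the broker are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast the payload to strings for downstream use.
    val parsed = raw.select(
      col("key").cast("string"),
      col("value").cast("string"),
      col("timestamp")
    )

    // Append micro-batches to S3 as Parquet; the checkpoint location is what makes
    // the job restartable after failures or upgrades (bucket paths are placeholders).
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/events/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}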


Required Skills

• 1–2 years of professional software engineering experience, ideally in data engineering or streaming environments.

• Hands-on Scala experience (functional programming mindset preferred).

• Practical knowledge of AWS services such as S3, Athena, EC2, IAM, and EMR.

• Strong understanding of Apache Kafka (consumer groups, offset commits); a brief illustration follows this list.

• Experience with Spark Structured Streaming in Scala.

• Proficiency with Git and comfort working in Mac development environments.
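
Since the required skills call out Kafka consumer groups and offset commits, here is a minimal Scala sketch using the standard Kafka Java client with auto-commit disabled and manual commitSync(). It assumes Scala 2.13+ (for scala.jdk.CollectionConverters); the group id, topic, and broker are placeholder values, not details from this posting.

import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

object OffsetCommitExample {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")   // placeholder broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-consumer-group") // consumer group id (placeholder)
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")        // commit offsets manually
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(List("events").asJava) // placeholder topic

    try {
      while (true) {
        val records = consumer.poll(Duration.ofMillis(500))
        records.asScala.foreach(r => println(s"${r.partition}/${r.offset}: ${r.value}"))
        // Commit offsets only after the batch has been processed, so a crash
        // replays unprocessed records instead of silently skipping them.
        consumer.commitSync()
      }
    } finally {
      consumer.close()
    }
  }
}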


Nice-to-Have Skills

• Experience with Terraform, Kubernetes, or general containerization.

• SQL for data analysis or data exploration.

• Exposure to large-scale datasets or distributed data systems.

