Senior Data Engineer
Remote (US only) - must be a US citizen or Green Card holder
$130K–$180K
Key Highlights
Design and build high-performance data infrastructure, architect data pipelines, and develop scalable data services on GCP. Collaborate with cross-functional teams to deliver key initiatives. Mentor peers and promote technical excellence.
Job Description
🌎 Remote policy & eligibility:
This role is fully remote, but you must reside in one of these US states:
Arizona, California, Colorado, Connecticut, Florida, Georgia, Illinois, Louisiana, Massachusetts, Michigan, Minnesota, New Jersey, New York, North Carolina, Pennsylvania, Texas, Vermont, Virginia, Washington
We’re supporting a fast-growing global SaaS company that powers connections across billions of digital interactions every month. Their data platform handles over 10 billion events monthly, serving millions of users worldwide, and they’re now scaling the backbone of it all: their Data Platform team.
This is a rare opportunity to join a lean, senior team (5 engineers) reporting directly to the VP of Data. You’ll help design and build the next-generation data platform that powers analytics, experimentation, and AI-driven features used at scale.
The Role
As a Senior Data Engineer, you’ll architect and evolve high-performance data infrastructure that ingests, transforms, and delivers data for analytics and product experiences. You’ll work across a wide variety of data sources, from real-time streams to event-driven pipelines, helping shape the systems that power billions of customer interactions.
You’ll take ownership across the full lifecycle, from ingestion to governance, quality, and security.
You’ll be responsible for:
- Designing, building, and optimizing data pipelines on GCP (Pub/Sub, Dataflow, BigQuery, etc.); see the pipeline sketch after this list
- Developing scalable data services and APIs for analytics and AI-centric workloads
- Maintaining high standards for data quality, observability, and governance
- Collaborating cross-functionally with product, data science, and engineering teams to deliver key initiatives
- Championing best practices around cost-efficiency, scalability, and performance
- Mentoring peers and promoting technical excellence within the team
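To make the pipeline bullet concrete, here is a minimal sketch of the kind of streaming job this stack implies: an Apache Beam (Python SDK) pipeline that reads JSON events from Pub/Sub and appends them to BigQuery. The project, topic, table, and field names are hypothetical placeholders, not details from this role.

```python
# Minimal Apache Beam sketch: stream JSON events from Pub/Sub into BigQuery.
# All resource names (project, topic, table) and fields are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a BigQuery-ready row."""
    event = json.loads(message.decode("utf-8"))
    return {
        "user_id": event["user_id"],
        "event_type": event["event_type"],
        "occurred_at": event["occurred_at"],
    }


def run() -> None:
    # Runner, project, and region come in as CLI flags when submitting to
    # Dataflow; streaming=True marks this as an unbounded pipeline.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteRows" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

WRITE_APPEND suits an append-only events table, while CREATE_NEVER assumes the table and its schema are managed separately; a job like this would typically be tested with the DirectRunner and submitted with --runner=DataflowRunner plus project and region flags.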
You bring:
- 4–5+ years of hands-on data engineering experience, ideally in smaller, fast-paced environments where you’ve worn multiple hats
- Deep expertise with Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.)
- Strong proficiency in SQL and Python
- Proven experience with high-volume data (millions of rows or more)
- Solid understanding of modern data tooling (Airflow, dbt, Beam/Spark); see the orchestration sketch after this list
- Familiarity with containerized deployments (Docker/Kubernetes) and CI/CD practices
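For the orchestration tooling named above, a hedged sketch of how Airflow and dbt typically fit together: a daily DAG whose single task runs dbt build. The DAG id, project path, and dbt target are illustrative assumptions, not details from the posting.

```python
# Hedged sketch: a daily Airflow DAG that runs a dbt build.
# DAG id, project path, and dbt target are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=False,
) as dag:
    # dbt build runs models, tests, and snapshots in dependency order.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt --target prod",
    )
```

In practice a task like this would sit downstream of ingestion and data-quality checks, in line with the observability and governance expectations above.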
Why this role
You’ll be part of a small but mighty data team trusted to solve complex engineering challenges at serious scale. This is the kind of environment where your work directly impacts billions of events, and your voice actually shapes the roadmap.