Senior Data Engineer

Tenth Revolution Group, United States
Remote
AI Summary

We are seeking a Senior Data Engineer to build and maintain data pipelines and transformation models that power analytics, reporting, and business intelligence across the organization. The ideal candidate will have hands-on experience with Snowflake, dbt, and ELT tools. This is a hands-on production role with direct ownership of data models and growing influence over engineering standards.

Key Highlights
Build and maintain data pipelines and transformation models
Hands-on experience with Snowflake, dbt, and ELT tools
Direct ownership of data models and growing influence over engineering standards
Key Responsibilities
Build and maintain ELT pipelines using Fivetran and custom integrations
Ingest data from source systems including ERP platforms, Salesforce, project management tools, and internal databases into Snowflake
Develop, test, and document dbt models that transform raw data into clean, structured datasets
Technical Skills Required
Data Engineering, Fivetran, Snowflake, dbt, Advanced SQL Writing, AI-Assisted Development
Benefits & Perks
$130,000 – $170,000 + benefits
Remote work
Nice to Have
Experience with manufacturing, ERP, supply chain, or project-based data
Familiarity with dimensional modeling (star schemas, fact/dimension tables)

Job Description


Data Engineer

Location: Remote (United States)

Reporting To: Director of Data & Analytics

Compensation Range: $130,000 – $170,000 + benefits


Company Overview

Our client is an award‑winning North American manufacturing organization undergoing a period of rapid, sustained growth. With decades of industry leadership, a national manufacturing footprint, and a strong reputation for engineering excellence, the company delivers highly customized, mission‑critical products across sectors such as technology, data centers, energy, healthcare, utilities, and industrial infrastructure.


As the business scales, data has become a strategic driver of operational efficiency, financial insight, and executive decision‑making. To support this growth, the company is continuing to invest in a modern data and analytics platform and is expanding its data engineering team.


Position Summary

The Data Engineer is a core individual contributor on the Data & Analytics team, responsible for building and maintaining the data pipelines and transformation models that power analytics, reporting, and business intelligence across the organization.


Working within a modern data stack—including Fivetran, Snowflake, dbt, Tableau, and AI‑assisted development tools—this role transforms raw source data into reliable, well‑tested, analytics‑ready datasets used by Finance, Operations, Supply Chain, Engineering, and Executive leadership.


This is a hands‑on production role with direct ownership of data models and growing influence over engineering standards as the organization continues to scale.


Must‑Have Skills

Candidates must have demonstrated, hands‑on experience in the following areas:

  • Data Engineering (building and supporting production data pipelines)
  • Fivetran (or equivalent ELT tools) for data ingestion
  • Snowflake (or comparable cloud data warehouses)
  • dbt (data build tool) for data transformation, testing, and documentation
  • Advanced SQL Writing (CTEs, window functions, complex joins, optimization)
  • AI‑Assisted Development: comfortable using AI coding assistants and agent‑based tools to accelerate SQL development, dbt modeling, testing, debugging, and documentation
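For candidates gauging the "Advanced SQL Writing" bar above, a minimal sketch of the expected level: a CTE feeding a window function, of the kind a dbt staging-then-transform model would contain. This is illustrative only and not from the posting; the table, columns, and data are invented, and the example runs against an in-memory SQLite database purely so it is self-contained.

```python
import sqlite3

# Toy source table standing in for raw ingested data.
# All names here are hypothetical, not from the job posting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme',   '2024-01-05', 100.0),
        ('acme',   '2024-02-10', 250.0),
        ('globex', '2024-01-20',  75.0);
""")

# A CTE stages the raw rows, then a window function computes a
# per-customer running total -- the staging/transform layering
# that dbt models typically follow.
rows = conn.execute("""
    WITH staged AS (
        SELECT customer, order_date, amount
        FROM orders
    )
    SELECT
        customer,
        order_date,
        SUM(amount) OVER (
            PARTITION BY customer
            ORDER BY order_date
        ) AS running_total
    FROM staged
    ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
```

The same pattern translates directly to Snowflake SQL inside a dbt model, where the CTE would reference an upstream staging model rather than a local table.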


Key Responsibilities

Data Pipeline Development

  • Build and maintain ELT pipelines using Fivetran and custom integrations
  • Ingest data from source systems including ERP platforms, Salesforce, project management tools, and internal databases into Snowflake

Data Transformation & Modeling

  • Develop, test, and document dbt models that transform raw data into clean, structured datasets
  • Build dimensional models (fact and dimension tables, staging layers) optimized for BI and ad‑hoc analysis

Data Quality & Reliability

  • Write and maintain dbt tests to validate data freshness, accuracy, and consistency
  • Monitor pipelines and investigate data quality issues from detection through root‑cause resolution

Source System Integration

  • Integrate new data sources using APIs and data connectors
  • Troubleshoot ingestion and processing issues to maintain reliable data flows

Documentation & Enablement

  • Maintain clear documentation for pipelines, data models, and business logic
  • Ensure analytics datasets are understandable and reusable across teams

Collaboration & Stakeholder Partnership

  • Work closely with business stakeholders to understand data needs
  • Translate analytical questions into scalable, well‑modeled datasets

Performance & Optimization

  • Monitor query performance and warehouse utilization
  • Optimize transformation models and pipelines for cost and scalability

Engineering Standards

  • Participate in code reviews and follow Git‑based workflows and CI/CD practices
  • Contribute to improving engineering standards and development processes

AI‑Assisted Development

  • Use AI coding assistants and agent‑based tools as part of daily workflows
  • Leverage AI to accelerate development, testing, refactoring, and documentation while maintaining production‑grade quality


Ideal Candidate Profile

  • 3–5 years of experience in data engineering or analytics engineering
  • Strong SQL expertise with a track record of building production‑ready datasets
  • Hands‑on experience with Snowflake, dbt, and ELT tools
  • Curious about the business context behind data and analytics
  • Comfortable working independently in a remote environment
  • Interested in leveraging AI to modernize and accelerate data engineering workflows


Qualifications

Required

  • Bachelor’s degree in Computer Science, Information Systems, Data Science, or equivalent experience
  • 3–5 years of experience building and maintaining cloud‑based data pipelines
  • Advanced SQL skills and strong data modeling fundamentals
  • Experience with Snowflake, dbt, and Fivetran
  • Proficiency in Python for scripting and integrations
  • Comfortable with Git, code reviews, and CI/CD
  • Experience using AI coding assistants in data engineering workflows
  • Strong communication skills and attention to detail

Preferred

  • Experience with manufacturing, ERP, supply chain, or project‑based data
  • Familiarity with dimensional modeling (star schemas, fact/dimension tables)


Location & Travel

  • Fully remote within the United States
  • Up to 10% travel for team collaboration, project kickoffs, and stakeholder meetings
