Senior Data Engineer

Signature IT World Inc • United States
Relocation
AI Summary

Lead the development, maintenance, and optimization of data pipelines and workflows within the Enterprise Data Platform. Collaborate with cross-functional teams to solve complex data challenges and create impactful solutions. Apply strong data engineering fundamentals and software engineering practices.

Key Highlights
Lead data pipeline development and maintenance
Collaborate with cross-functional teams
Apply data engineering fundamentals and software engineering practices
Key Responsibilities
Lead the design, development, and maintenance of scalable data pipelines
Build pipelines and workflows as code
Partner with data scientists and analysts to gather requirements
Technical Skills Required
Databricks, Snowflake, PySpark, SQL, Python, ETL/ELT tools, Git-based development, CI/CD for data pipelines
Benefits & Perks
Hybrid work arrangement
Relocation possible
Video interview
Nice to Have
IICS
Airflow
Terraform
CloudFormation

Job Description


Role: Senior Data Engineer

Preferred location: Omaha, NE (1st preference) or Chicago, IL; relocation possible. Hybrid (Tuesday, Wednesday, Thursday in office).

Total positions: Multiple

Must-have skills: Databricks, Snowflake, PySpark (hands-on development experience)

Experience required: 6+ years

Video interview


Job Details:

Minimum years of experience required: 5–8 years

Certification needed: Not mandatory

Must-have skills: Databricks, Snowflake, PySpark

Nice-to-have skills: IICS, Python


Detailed Job Description:


As a Senior Data Engineer, you will play a key role in leading the development, maintenance, and optimization of data pipelines and workflows within our Enterprise Data Platform. You’ll apply strong data engineering fundamentals along with software engineering and DevOps practices, so pipelines are built, deployed, and monitored as code. Your work will help ensure data accuracy, reliability, and accessibility, enabling teams across the organization to make informed decisions.


This position offers an opportunity to lead technical solutions, mentor engineers, and collaborate with cross-functional teams to solve complex data challenges and create impactful solutions.


Key Responsibilities:

• Lead the design, development, and maintenance of scalable data pipelines that process and integrate data from multiple sources into the Enterprise Data Platform.

• Build pipelines and workflows as code using modern engineering practices (version control, code reviews, automated testing, reusable components).

• Define and implement patterns for CI/CD for data pipelines (automated builds, tests, deployments, and environment promotion).

• Partner with data scientists, analysts, and business teams to gather requirements and translate them into robust data solutions.

• Build and optimize SQL queries and transformations to support complex business use cases and analytics needs.

• Design and manage data models; validate them with business stakeholders, data architects, and governance partners.

• Establish data quality checks, validation, and troubleshooting practices to ensure accuracy, consistency, and trust in data products.

• Monitor and optimize pipeline performance and reliability; implement observability (logging/metrics/alerts) and contribute to operational runbooks.

• Drive automation to improve efficiency, reduce manual effort, and increase repeatability of platform operations.

• Provide technical leadership through mentoring, reviews, and guidance on best practices and standards.

• Participate in Agile ceremonies to plan, estimate, and deliver work efficiently.

• Create and maintain documentation for data workflows, transformations, standards, and operational procedures.


Technical Skills:

• Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).

• 5–8 years of experience in data engineering or a related role.

• Advanced proficiency in SQL for complex data transformation and analysis.

• Hands-on experience with cloud-based data platforms such as Databricks, Snowflake, or similar tools.

• Experience with ETL/ELT tools and frameworks (e.g., Informatica, Talend, dbt, or equivalent).

• Strong proficiency in Python and/or PySpark for data processing and pipeline development.

• Strong understanding of data modeling, database design principles, and building curated datasets for analytics and operational use cases.

• Experience with DevOps practices and Git-based development (branching strategies, pull requests, code reviews).

• Experience implementing CI/CD for data pipelines/workflows and managing deployments across environments.

• CPG domain knowledge is a plus.

• Familiarity with orchestration and workflow tools (e.g., Databricks Workflows, Airflow, or similar) is preferred.

• Familiarity with Infrastructure as Code (e.g., Terraform, CloudFormation) and/or containerization concepts is a plus.

• Strong problem-solving skills, attention to detail, and ability to troubleshoot complex issues end-to-end.

• Excellent communication skills and ability to collaborate across technical and non-technical teams.

