Job Description
Data never stops moving. Your work gives it direction, discipline, and trust.
About the Role
We are seeking a Data Engineer who thrives at the intersection of systems, logic, and scale: someone who builds resilient data pipelines, safeguards data quality, and loves solving complex problems, whether in production or on a LeetCode challenge.
Location: Fully Remote (U.S. Based)
Duration: 6+ Month Contract
Type: W-2 Contract Only; C2C, third-party, and sponsorship arrangements are not supported.
Responsibilities
- Design, build, and maintain scalable, reliable pipelines using Apache Airflow or similar orchestration tools (a brief sketch of such a pipeline follows this list).
- Develop and optimize advanced SQL queries to support analytics, reporting, and data validation.
- Work extensively with Hive and distributed query engines to process large datasets efficiently.
- Use Python to automate workflows, transform data, and improve pipeline reliability.
- Implement data quality monitoring, validation rules, and alerting to ensure trusted data delivery.
- Collaborate with technical and non-technical stakeholders to translate requirements into durable solutions.
- Own tasks end-to-end, managing priorities and delivering independently.
- Create and maintain clear technical documentation for maintainability and team knowledge sharing.
- Continuously optimize pipelines for performance, reliability, and scalability.
- Apply strong algorithmic thinking and problem-solving skills, ideally demonstrated through platforms like LeetCode, HackerRank, or similar.
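To make the orchestration work above concrete, here is a minimal sketch of the kind of Airflow pipeline this role owns, with a simple data-quality gate. The DAG id, task names, schedule, and row counts are hypothetical illustrations, not details of any actual system described in this posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_events(**context):
    """Placeholder extract/load step; a real task would query Hive or a
    similar warehouse rather than return a hard-coded count."""
    return 1000  # pretend we loaded 1,000 rows


def validate_row_counts(**context):
    """Simple data-quality gate: fail the run if the load looks empty."""
    loaded = context["ti"].xcom_pull(task_ids="load_events")
    if not loaded:
        raise ValueError("load_events produced no rows; failing the run")


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    load = PythonOperator(task_id="load_events", python_callable=load_events)
    check = PythonOperator(
        task_id="validate_row_counts", python_callable=validate_row_counts
    )
    load >> check  # the quality gate runs only after the load succeeds
```

Failing the task on a bad check is what connects validation rules to alerting: Airflow's retry and on-failure notification hooks then take over.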
Qualifications
- Expert in SQL, including complex queries, performance tuning, and working with large datasets (see the sketch after this list).
- Strong hands-on experience with Python for automation and data manipulation.
- Experience building pipelines using Apache Airflow or similar orchestration tools.
- Hands-on experience with Hive and large-scale data warehouses.
- Familiarity with distributed data processing and performance optimization.
- Experience implementing data quality frameworks and alerting.
- Passion for problem-solving, algorithmic thinking, and coding challenges.
- Ability to work independently while collaborating across teams.
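To ground the SQL expectation above, here is a small, self-contained sketch of one common pattern, window-function deduplication, run against an in-memory SQLite database (window functions need SQLite 3.25+) so it executes anywhere; the events table and its columns are hypothetical. In production the same query shape would run on Hive or Presto/Trino.

```python
import sqlite3

# Tiny in-memory table so the query below is runnable as-is.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, event_ts TEXT, payload TEXT);
    INSERT INTO events VALUES
        (1, '2024-01-01 10:00', 'a'),
        (1, '2024-01-01 11:00', 'b'),
        (2, '2024-01-01 09:00', 'c');
""")

# Keep only the most recent event per user via ROW_NUMBER().
rows = conn.execute("""
    SELECT user_id, event_ts, payload
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id
                   ORDER BY event_ts DESC
               ) AS rn
        FROM events
    ) AS ranked
    WHERE rn = 1
    ORDER BY user_id
""").fetchall()

print(rows)  # [(1, '2024-01-01 11:00', 'b'), (2, '2024-01-01 09:00', 'c')]
```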
Preferred Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- 3+ years in data engineering or large-scale data systems.
- Familiarity with Presto/Trino, Spark, or other modern data tools.
- Experience applying automation or AI-driven approaches to optimize workflows.
Why This Role
You’ll work where logic meets scale, transforming complexity into clarity. You’ll solve problems that matter, write code that endures, and build pipelines that never sleep. If you love LeetCode, algorithms, and building systems people rely on, this is the place to thrive.
Equal Opportunity Statement
We are committed to diversity and inclusivity.