AWS Data Engineer

Smart IT Frame LLC • United States
Relocation
AI Summary

Design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services. Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions. Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments.

Key Highlights
Design and build scalable data pipelines and ETL solutions
Collaborate with data consumers and stakeholders
Contribute to CI/CD and infrastructure-as-code
Technical Skills Required
Python, PySpark, AWS Glue, AWS Lambda, AWS SNS, AWS SQS, Amazon Redshift, SQL, Git, DevOps, CI/CD
Benefits & Perks
Full-time employment
Relocation accepted
Onsite from day one in Herndon, VA

Job Description


Role: AWS Data Engineer

Location: Reston, VA, onsite from day one (candidates must be present in person at Herndon, VA; relocation is fine)

Position Type: Full-time


Job Description:

Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.


Key Responsibilities:

  • Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms
  • Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
  • Implement messaging and event-driven patterns using AWS SNS and SQS
  • Design and optimize data storage and querying in Amazon Redshift
  • Write performant SQL for data transformations, validation, and reporting
  • Ensure data quality, monitoring, error handling and operational support for pipelines
  • Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
  • Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
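To illustrate the event-driven and error-handling responsibilities above, here is a minimal sketch of an SQS-triggered Lambda handler that uses AWS Lambda's partial-batch-failure response shape, so only failed messages are retried. The message fields (e.g. `order_id`) and the validation rule are hypothetical placeholders, not part of this role's actual pipelines.

```python
import json

def handler(event, context=None):
    """Sketch of an SQS-triggered Lambda step in an ETL pipeline.

    Returns the batchItemFailures structure expected when the SQS event
    source mapping has ReportBatchItemFailures enabled, so SQS redelivers
    only the messages that failed.
    """
    failures = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # a transform/load step would go here; this minimal check
            # stands in for real data-quality validation
            if "order_id" not in body:
                raise ValueError("missing order_id")
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Example event with one valid and one malformed message; only the
# malformed message is reported back to SQS for retry.
result = handler({"Records": [
    {"messageId": "a1", "body": json.dumps({"order_id": 42})},
    {"messageId": "a2", "body": "not json"},
]})
```

Catching failures per message rather than letting the whole batch fail is the usual pattern when a single bad record should not block the rest of the queue.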


Required Skills:

  • Strong experience with Python and PySpark for large-scale data processing
  • Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions
  • Solid SQL skills and familiarity with data modeling and query optimization
  • Experience with ETL best practices, data quality checks, and monitoring/alerting
  • Familiarity with version control (Git) and basic DevOps/CI-CD workflows
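For the SQL side of the stack, a runnable sketch of a transformation-with-validation step; `sqlite3` stands in for Amazon Redshift here, and the table and column names are illustrative only.

```python
import sqlite3

# In-memory database as a stand-in for a Redshift staging schema
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, amount REAL);
    INSERT INTO raw_orders VALUES (1, 10.0), (2, NULL), (1, 10.0);
""")

# Deduplicate and enforce a NOT NULL data-quality rule in one pass,
# a common shape for ETL cleanup queries
clean = conn.execute("""
    SELECT DISTINCT order_id, amount
    FROM raw_orders
    WHERE amount IS NOT NULL
""").fetchall()
```

On Redshift the same query shape applies, though large-scale deduplication there is usually paired with distribution and sort keys chosen for the workload.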

