AI Summary
Join a dynamic, cloud-focused team as a Senior Data Engineer. Design scalable data pipelines, manage AWS infrastructure, and collaborate with cross-functional teams to drive data-driven decisions.
Key Highlights
Design and maintain AWS-based data architectures including Data Lakes (S3) and Data Warehouses (Redshift).
Develop ETL/ELT pipelines using AWS Glue and EMR (Spark).
Implement CI/CD pipelines for data workflows using GitLab CI.
Create event-driven serverless architectures using AWS Lambda, EventBridge, and Step Functions.
Configure data governance and access policies with AWS Lake Formation.
Benefits & Perks
Competitive salary
Fully remote work
Professional growth and training
Access to modern cloud technologies
Health, dental, and other benefits
Job Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Mid-level/Senior Data Engineer (Engenheiro de Dados Pleno/Sênior) in Brazil.
We are seeking a skilled Data Engineer with a strong automation mindset to join a dynamic, cloud-focused team. In this role, you will act as the bridge between AWS cloud infrastructure and business intelligence, designing scalable, resilient, and secure data pipelines. You will transform large volumes of raw data into optimized formats for analysis, enabling data-driven decisions across the organization. The role involves close collaboration with data analysts, data scientists, and software engineers, and offers the opportunity to work on cutting-edge cloud technologies and automation processes. This position is fully remote, providing flexibility while working in a collaborative, innovation-driven environment.
Accountabilities:
- Design, build, and maintain AWS-based data architectures, including Data Lakes (S3) and Data Warehouses (Redshift), ensuring high availability and efficient partitioning.
- Develop ETL/ELT pipelines using AWS Glue and EMR (Spark), optimizing for performance and cost.
- Manage infrastructure as code (IaC) with Terraform, ensuring all resources are provisioned programmatically.
- Implement CI/CD pipelines for data workflows using GitLab CI, incorporating version control and automated testing.
- Create event-driven serverless architectures using AWS Lambda, EventBridge, and Step Functions for both real-time and batch processing.
- Configure data governance and access policies with AWS Lake Formation, ensuring compliance with LGPD and information security standards.
- Implement monitoring and observability with CloudWatch, proactively managing pipeline health and incident resolution.
- Collaborate with cross-functional teams to understand requirements and deliver efficient, scalable data solutions.
Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or an equivalent field.
- Proven experience as a Data Engineer or in a similar role.
- Advanced Python skills (data manipulation and scripting) and SQL expertise.
- Hands-on experience with AWS core services: S3, Redshift, Glue, Lambda, Athena, and IAM.
- Mandatory experience with Terraform for infrastructure management.
- Strong understanding of cloud-based data architectures and best practices.
- Excellent problem-solving, analytical, and communication skills.
- Experience collaborating in cross-functional teams, supporting data-driven decision-making.
Additional Desired Skills:
- Advanced orchestration experience with Apache Airflow (self-hosted or MWAA).
- Streaming knowledge using Amazon Kinesis or Kafka.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes (EKS).
- Experience with DataOps/DevOps practices for automating deployment of data pipelines.
- AWS certifications (Certified Data Analytics or Solutions Architect) are a plus.
Benefits:
- Competitive salary commensurate with experience.
- Fully remote work with flexible arrangements.
- Opportunities for professional growth and training.
- Access to modern cloud technologies and hands-on experience with large-scale data systems.
- Collaborative and innovation-driven work environment.
- Health, dental, and other benefits depending on company policy.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
It compares your profile to the job's core requirements and past success factors to determine your match score.
Based on this analysis, we automatically shortlist the three candidates with the highest match to the role.
When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.
Thank you for your interest!
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.