AI Summary
Design, develop, and maintain scalable data pipelines and analytics solutions. Collaborate with cross-functional teams and internal stakeholders. Work with cutting-edge technologies and contribute to enterprise-level data modernization projects.
Key Highlights
Design and develop scalable data pipelines
Collaborate with cross-functional teams
Work with cutting-edge technologies
Benefits & Perks
Hourly rate of USD 23.75–25.00
Fully remote position
Flexible 35–40 hour work week
Job Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer in Latin America.
In this role, you will design, develop, and maintain scalable, high-performance data pipelines and analytics solutions across multiple departments. You will work autonomously in a remote environment, applying modern data engineering tools and cloud-based platforms to deliver production-ready data solutions. Your work will directly impact business intelligence, analytics, and reporting, enabling teams to make data-driven decisions efficiently. Collaboration with cross-functional teams and internal stakeholders will be key to translating complex requirements into robust, reliable data workflows. This position offers opportunities to work with cutting-edge technologies and contribute to enterprise-level data modernization projects.
Accountabilities
- Design, develop, and optimize scalable data pipelines using Databricks, Spark, Delta Lake, and notebook-based workflows.
- Build, automate, and maintain ETL/ELT workflows aligned with organizational standards.
- Support data modeling, pipeline orchestration, and data quality initiatives within cloud environments.
- Collaborate with cross-functional teams to deliver reliable, production-ready data solutions.
- Develop and maintain data integrations with Oracle database environments and Microsoft Fabric for analytics and reporting.
- Provide support for legacy Microsoft data stack tools (SQL Server, SSIS, SSRS, SSAS) when needed.
- Partner with stakeholders to refine requirements, optimize data workflows, and ensure accessibility and reliability of datasets.
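The pipeline work described above follows a standard extract–transform–load shape. The sketch below illustrates that shape with a hypothetical, stdlib-only example; in this role the extract would read from Oracle or cloud storage and the load would write a Delta Lake table via PySpark, but the structure is the same. All field names and sample data here are invented for illustration.

```python
import csv
import io

# Hypothetical sample input; a real pipeline would read from Oracle tables
# or cloud storage rather than an in-memory CSV string.
RAW_CSV = """order_id,region,amount
1001,LATAM,250.00
1002,EMEA,
1003,LATAM,99.50
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types, drop incomplete records, filter to one region."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # data-quality rule: skip records with a missing amount
        cleaned.append({
            "order_id": int(row["order_id"]),
            "region": row["region"],
            "amount": float(row["amount"]),
        })
    return [r for r in cleaned if r["region"] == "LATAM"]

def load(rows: list[dict]) -> dict[int, dict]:
    """Load: index rows by primary key. A production pipeline would
    instead write a Delta Lake table or a warehouse staging table."""
    return {r["order_id"]: r for r in rows}

if __name__ == "__main__":
    result = load(transform(extract(RAW_CSV)))
    print(sorted(result))  # → [1001, 1003]
```

Keeping each stage a pure function, as above, is what makes pipelines like these testable and easy to orchestrate as separate notebook or workflow steps.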
Requirements
- Extensive hands-on experience with Databricks, including Spark, Delta Lake, and notebook-based development.
- Strong proficiency in Python, PySpark, SQL, and distributed data processing.
- Proven experience with cloud data engineering and enterprise-scale data pipelines.
- Familiarity with Microsoft Fabric, Power BI, or related tooling.
- Working knowledge of legacy Microsoft data stack (SQL Server, SSIS, SSRS, SSAS) and Oracle databases.
- Ability to develop scalable, secure ETL/ELT pipelines following best practices.
- Strong documentation, communication, and stakeholder-management skills.
- Nice-to-haves: experience with data quality frameworks, testing, monitoring, Azure Data Factory, Synapse, or migrating legacy BI/data systems.
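The data-quality frameworks, testing, and monitoring mentioned above generally reduce to one idea: declarative rules evaluated against records, with failure counts fed to monitoring. A minimal stdlib sketch of that idea follows; tools such as Great Expectations formalize it, and the rule names and sample records here are made up.

```python
# Each rule is a (name, predicate) pair applied per record.
RULES = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("region_known", lambda r: r["region"] in {"LATAM", "EMEA", "APAC"}),
]

def run_checks(records: list[dict]) -> dict[str, int]:
    """Return a failure count per rule, suitable for monitoring or alerting."""
    failures = {name: 0 for name, _ in RULES}
    for record in records:
        for name, predicate in RULES:
            if not predicate(record):
                failures[name] += 1
    return failures

sample = [
    {"amount": 120.0, "region": "LATAM"},
    {"amount": -5.0, "region": "LATAM"},  # fails amount_non_negative
    {"amount": 40.0, "region": "MARS"},   # fails region_known
]
print(run_checks(sample))  # → {'amount_non_negative': 1, 'region_known': 1}
```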
Benefits
- Independent contractor role with an hourly rate of USD 23.75–25.00, depending on experience.
- Fully remote position within Latin America.
- Flexible 35–40 hour work week.
- Opportunity to work with cutting-edge data engineering tools and cloud platforms.
- High degree of autonomy and ownership over technical deliverables.
- Collaboration with a skilled, international team on impactful enterprise projects.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job’s core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the three candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias — focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.
Thank you for your interest!
By submitting an application to this posting, the applicant acknowledges that Jobgether will process their personal data as necessary to evaluate their candidacy, provide feedback, and, when appropriate, share relevant information with potential employers. Such processing is carried out on the basis of legitimate interest and pre-contractual measures in accordance with applicable data protection laws. The applicant may exercise their rights of access, rectification, erasure, and objection at any time as provided under the GDPR.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.