ZYTLYN Technologies is hiring a Senior Data Engineer in Geneva, Switzerland, to build and maintain the systems supporting its products and analytics, with a focus on data pipelines, architecture, and data quality for the company's predictive AI solutions.
Who We Are
ZYTLYN Technologies empowers companies across the $11 trillion travel industry to shape the future with predictive AI solutions that augment commercial planning, sales, marketing, retailing and operations. We work with some of the largest travel brands in the world, and our vision is to answer highly detailed and granular questions about the future of travel, such as demand, supply, market fluctuations and pricing. Our core focus is on airlines, airports, travel agencies, destinations, tourism boards, hotels, car rentals, travel retailers, and luxury brands.
Who We Are Looking For
As a Senior Data Engineer, you'll be responsible for building and maintaining the systems that support our products and analytics. You'll have the opportunity to take ownership of key pipelines, influence the technical direction of our platform, and collaborate closely with engineers and data scientists to deliver reliable, high-quality data.
Location / Contract type
- Geneva, Switzerland (office/hybrid), or
- Fully remote contractor (GMT+1 to GMT+4)
We have a culture that focuses on empowering people, with team members working in our HQ (Geneva, Switzerland), and all across Europe (e.g. France, Spain, Italy, Poland, UK). We believe a diverse team creates better outcomes and fosters a better environment for learning and growth. We put a lot of emphasis on communication, listening, efficient processes and trusting our team. We rely on each other, and work together to achieve our common goals. We believe in working smart, with strong focus and intensity, tackling every challenge as a team.
Your work
- Own the design, build, and maintenance of reliable batch pipelines using PySpark and Python;
- Influence the future direction of our data platform, with potential to design and implement streaming pipelines;
- Design and optimise data architecture on AWS;
- Ensure high-quality, observable data flows into downstream systems that power analytics, product features and decision-making;
- Champion solid engineering practices (CI/CD, testing, Git workflows);
- Ensure the quality and suitability of datasets for downstream use;
- Collaborate with product managers, engineers and data scientists to deliver trusted datasets and support model development/deployment
Basic requirements
- 5+ years of data engineering experience building large-scale data platforms;
- Proven hands-on experience with Spark, AWS, Python/Scala, and SQL;
- Strong experience with AWS Lambda, AWS S3 and Athena;
- Track record of orchestrating, monitoring, and maintaining high-volume batch pipelines across distributed systems and cloud environments;
- Proficiency with Docker and containerised deployments;
- Strong engineering practices: CI/CD pipelines, automated testing, GitHub/GitLab Flow;
- Resourceful self-starter, comfortable with ambiguity and shifting priorities in a startup;
- Highly organised, disciplined, and detail-oriented;
- Excellent verbal and written communication and listening skills
Nice to have
- Familiarity with Kafka or similar event-streaming technologies, and exposure to building/maintaining streaming pipelines;
- Familiarity with AWS-native solutions like Step Functions;
- Experience designing data warehousing solutions (Redshift, Snowflake, BigQuery);
- Experience with Infrastructure as Code (Terraform, CloudFormation);
- Exposure to MLflow or similar tools, and familiarity with model deployment workflows;
- Awareness of data quality practices and governance principles;
- Familiarity with Kubernetes for container orchestration
What we offer
- Join a team of exceptional talent — At ZYTLYN, we hire thoughtfully and selectively, bringing together a small, focused team of high performers. We believe that a lean and empowered team moves faster, builds smarter, and achieves more. You'll collaborate with driven colleagues who value efficiency, ownership, and impact
- Swiss employment contract based in Geneva (hybrid), or a fully remote B2B contractor option
- Competitive salary, adjusted for experience and market benchmarks