This position involves designing and building scalable data pipelines using modern cloud data stacks for global clients. It requires expertise in data ingestion, transformation, and processing in AWS, Azure, or GCP environments. The role is fully remote, long-term, and open to engineers with cloud and big data experience.
Job Description
We are hiring multiple Data Engineers to join the data platform, analytics, and cloud engineering teams of our international clients. These fully remote, long-term freelance roles are ideal for engineers who can build scalable data pipelines, work with modern cloud-native data stacks, and support large-scale enterprise data initiatives.
Open Roles (Multiple Positions)
We are recruiting across core and specialized data engineering areas:
Core Data Engineering
- Data Engineer
- Senior Data Engineer
- Cloud Data Engineer (AWS / Azure / GCP)
Specialized Roles
- ETL / ELT Developer
- Big Data Engineer (Spark / Hadoop / Databricks)
- Data Pipeline Engineer
- Data Platform Engineer
- Streaming Data Engineer (Kafka / Kinesis / Pub/Sub)
If you have strong experience building data systems or pipelines, we encourage you to apply.
Engagement Details
- Type: Independent Freelance Consultant
- Location: 100% Remote
- Duration: Initial 6–12 month contract (extendable to multi-year)
- Start Date: Immediate or within the next few weeks
- Clients: Global enterprises, SaaS companies, and cloud-first data teams
Key Responsibilities
- Design and build scalable, reliable data pipelines using modern data engineering tools and frameworks.
- Develop ETL/ELT workflows for structured, semi-structured, and unstructured data.
- Implement data ingestion, transformation, storage, and processing solutions.
- Work with cloud-native data services (AWS Glue, Redshift, EMR, Azure Data Factory, Synapse, GCP BigQuery, Dataflow).
- Build batch and streaming data pipelines using Spark, Databricks, Kafka, or similar technologies.
- Optimize performance, cost, and reliability of data systems for large-scale deployments.
- Collaborate with analytics, BI, ML, and backend teams to deliver end-to-end data solutions.
- Ensure data quality, integrity, governance, and security across data workflows.
- Support CI/CD pipelines, version control, and automation related to data environments.
Technical Skills Required
- Minimum 2 years of hands-on experience as a Data Engineer.
- Strong experience with Python or SQL (or both).
- Practical knowledge of data pipeline development using Spark, PySpark, or equivalent.
- Hands-on experience with one major cloud platform (AWS, Azure, or GCP).
- Understanding of data modeling, warehousing concepts, and distributed systems.
- Experience working with ETL/ELT tools or frameworks.
- Ability to work independently in a remote, distributed setup.
Nice to Have
- Experience with Databricks or large-scale Spark clusters.
- Knowledge of streaming technologies (Kafka / Kinesis / Pub/Sub / Flink).
- Experience working with data lakes (S3, ADLS, GCS) and lakehouse architectures.
- Exposure to orchestration tools such as Airflow, Dagster, Prefect, or AWS Step Functions.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience integrating with BI, ML, or analytics platforms.
- Cloud certifications (AWS / Azure / GCP Data Engineer) are a plus.
Benefits & Perks
- Large-scale, cloud-native data engineering projects.
- Multiple openings with fast-track onboarding.
- Fully remote with flexible working hours.
- Long-term freelance roles with consistent project work.
- Work with modern data stacks, lakehouse architectures, and global teams.
Send your CV to Careers@SkillsCapital.io