Junior Backend AI & Data Pipeline Engineer

Seeka Limited • Pakistan
Remote

Job Description


Company Description

We believe strong early career roles should do more than give someone a job title. They should help people grow into the kind of professionals they want to become. At SEEKA Technologies, we give junior team members meaningful work that builds real capability, sharpens problem-solving, and develops practical experience in fast-moving business and technology environments. Our goal is to help emerging talent strengthen their skills, expand their potential, and prepare for a future shaped by constant innovation across business and IT.

SEEKA Technologies (not Seeka Limited) is a project under its parent organisation, Fresh Futures Australia, an education consultancy based in both Australia and Malaysia. We are building a platform that uses AI to match students and job seekers with the opportunities most relevant to them, from kindergarten through to university, as well as vocational training centres, language schools, and the businesses and companies that need the right candidates. Our mission is to make it easier for anyone to find, filter, and apply to educational institutions and companies seamlessly.

We are currently looking to hire a junior Backend AI & Data Pipeline Engineer who wants to build real-world experience in backend systems, data processing, scraping, retrieval, and cloud-based infrastructure. This role is ideal for someone who already has hands-on technical experience and wants to grow further by working on meaningful engineering challenges that support Yuzee’s intelligent matching platform. You will contribute to the systems that process data, power search and matching, and improve the efficiency, reliability, and scalability of our platform.

Below are the key details to note:

  • English is the primary language used in the role
  • This is a full-time remote/work-from-home position
  • We welcome both local and international candidates
  • Candidates should have a degree or proven practical experience relevant to the role

About The Role

We are looking for a Backend AI & Data Pipeline Engineer to own the end-to-end data processing infrastructure that powers Yuzee's intelligent course and job matching platform. You will design and maintain scalable, event-driven pipelines that process tens of thousands of daily records, generate semantic embeddings, and feed a growing knowledge graph used for personalised career pathway recommendations.
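The similarity calibration this role owns can be sketched with plain cosine similarity. This is an illustrative toy, not Yuzee's implementation: the production pipeline embeds text with Titan v2 and queries MongoDB Atlas Vector Search, whereas here the vectors, course IDs, and the 0.75 threshold are all made-up stand-ins.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def matches(query_vec, candidates, threshold=0.75):
    """Return (score, id) pairs clearing the threshold, best first.
    The threshold is the knob that similarity calibration tunes."""
    scored = ((cosine(query_vec, vec), cid) for cid, vec in candidates.items())
    return sorted((pair for pair in scored if pair[0] >= threshold), reverse=True)

# Toy 3-dimensional "embeddings"; real ones come from an embedding model.
courses = {
    "data-eng": [0.9, 0.1, 0.0],
    "nursing": [0.0, 1.0, 0.0],
}
q = [1.0, 0.0, 0.0]
top = matches(q, courses)  # only "data-eng" clears the 0.75 threshold
```

Raising the threshold trades recall for precision; calibrating it against labelled matches is the kind of tuning the role involves.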

What You'll Do

  • Design and maintain three distinct processing pipelines — scheduled job ingestion, event-driven course processing, and a periodic knowledge graph builder — each with independent trigger logic and cost controls
  • Generate and manage semantic embeddings via Amazon Bedrock (Titan v2), index them in MongoDB Atlas Vector Search, and calibrate similarity thresholds to ensure match accuracy
  • Build and maintain a knowledge graph linking jobs, courses, skills, and industries using FP-Growth association rules and archetype-to-SOC code mapping
  • Build and improve a two-stage discovery and matching API on AWS Lambda — vector retrieval first, then deep eligibility scoring with LLM re-ranking
  • Right-size Fargate Spot instances and design resumable processing loops that tolerate interruption, keeping infrastructure costs under control as data volume scales
  • Maintain and improve daily job scrapers across multiple sources and build institution data scrapers with robust HTML cleaning pipelines
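The Spot-resilience point above can be sketched as a checkpointing loop. This is a minimal illustration under assumed names (`process_resumably`, a file-based JSON checkpoint); a production pipeline would persist progress to something durable such as S3 or DynamoDB rather than local disk.

```python
import json
import tempfile
from pathlib import Path

def process_resumably(records, checkpoint_path, handle):
    """Process records in order, persisting the next index after each record
    so an interrupted run (e.g. a reclaimed Spot task) can resume where it
    stopped instead of reprocessing everything."""
    ckpt = Path(checkpoint_path)
    start = json.loads(ckpt.read_text())["next"] if ckpt.exists() else 0
    for i in range(start, len(records)):
        handle(records[i])
        ckpt.write_text(json.dumps({"next": i + 1}))  # checkpoint progress
    return len(records) - start  # records processed by this run

# Simulate a Spot interruption partway through, then resume.
ckpt_file = Path(tempfile.mkdtemp()) / "checkpoint.json"
records = ["a", "b", "c", "d"]
done, fired = [], []

def handler(rec):
    if rec == "c" and not fired:
        fired.append(True)
        raise RuntimeError("spot interruption")
    done.append(rec)

try:
    process_resumably(records, ckpt_file, handler)
except RuntimeError:
    pass  # instance reclaimed mid-run
resumed = process_resumably(records, ckpt_file, handler)  # picks up at "c"
```

The second run starts at the checkpointed index, so each record is handled exactly once across both runs.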

What We're Looking For

  • 1+ years of backend engineering experience focused on data pipelines, ML infrastructure, or search systems
  • Hands-on experience with AWS serverless and container services — Lambda, ECS Fargate, EventBridge, and Step Functions
  • Strong Python skills — Pandas, async processing, bulk database operations, and text cleaning
  • Familiarity with vector databases and semantic similarity search; MongoDB Atlas Vector Search experience is a strong plus
  • Cost-conscious infrastructure mindset — you think in per-record compute costs, free tiers, Spot resilience, and right-sizing
  • Ability to document and communicate complex architecture clearly to both technical and non-technical stakeholders
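As a taste of the "text cleaning" skill listed above, here is a minimal HTML-to-text cleaner using only the standard library. It is a sketch, not the actual scraper pipeline: `TextExtractor` and `clean_html` are illustrative names, and real scraped pages need more robust handling than this.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content from HTML, skipping script/style blocks."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def clean_html(raw: str) -> str:
    """Strip tags (entities are decoded by the parser) and collapse whitespace."""
    parser = TextExtractor()
    parser.feed(raw)
    return re.sub(r"\s+", " ", " ".join(parser.parts)).strip()
```

For example, `clean_html` turns a markup fragment with entities and an embedded script into a single clean line of text.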

Nice to Have

  • Experience with knowledge graphs or association rule mining (FP-Growth, Apriori)
  • Experience using LLMs for re-ranking or eligibility assessment on top of vector retrieval results
  • Background in edtech, jobtech, or recommendation/matching systems
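The association-rule mining mentioned above (FP-Growth, Apriori) can be illustrated with a toy Apriori-style pass over skill "transactions". All names and data here are invented for illustration; a real knowledge-graph builder would use a proper FP-Growth implementation, which finds the same frequent itemsets without materialising every candidate pair.

```python
from collections import Counter
from itertools import combinations

def frequent_skill_pairs(transactions, min_support):
    """Count skill co-occurrences across postings and keep pairs whose
    support (fraction of postings containing both skills) meets min_support."""
    counts = Counter()
    for skills in transactions:
        for pair in combinations(sorted(set(skills)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Toy "transactions": each posting is a set of extracted skills.
jobs = [
    {"python", "pandas", "aws"},
    {"python", "pandas"},
    {"python", "aws"},
    {"sql", "pandas"},
]
pairs = frequent_skill_pairs(jobs, min_support=0.5)
# ("aws", "python") and ("pandas", "python") each appear in 2 of 4 postings
```

Frequent pairs like these become candidate edges in a skills knowledge graph.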

Qualifications

A degree, or proven practical experience relevant to the role

Additional Information

Benefits

  • Fully remote / work-from-home role
  • Flexible working hours within the team’s expected schedule and business needs
  • Opportunity to work on real backend, data, and AI infrastructure projects
  • Exposure to practical engineering challenges in scraping, pipelines, retrieval, and cloud systems
  • Ongoing growth and development within a fast-moving technology environment
  • Opportunity to build long-term value and grow with the company based on performance, including progression and increased responsibility over time

