Senior Backend and Data Engineer - AI and Machine Learning

West End Workforce · United States
Relocation
AI Summary

Design and deliver high-concurrency services using Node.js and Python. Engineer robust data transformation and ML pipelines. Architect resilient solutions using GCP and AWS.

Key Highlights
Design and deliver event-driven services using Node.js and JavaScript or TypeScript
Engineer robust data transformation, orchestration, and advanced ML pipelines in Python
Architect resilient, production-grade solutions using GCP and AWS
Technical Skills Required
Node.js, JavaScript, TypeScript, Python, Express, Fastify, NestJS, asyncio, Pydantic, FastAPI, GCP, AWS, Google Cloud Pub/Sub, Kafka, BigQuery, Firestore, Redis, PostgreSQL, ClickHouse, Elastic, Cloud Functions, Cloud Run, Dataflow, Apache Beam, Docker, Kubernetes, Terraform, GitHub Actions, OpenTelemetry, Prometheus, Grafana, Cloud Logging and Monitoring, OAuth2, OIDC, Protobuf, Avro
Benefits & Perks
Competitive salary
Early-stage equity
Relocation package
Hybrid and on-site work options

Job Description


Staff Engineer (AI • Backend • Data) | Backend and data engineering with Node.js, Python, and AI


*Ready to push boundaries in backend and data engineering for next-generation AI products?*


About the Company

We are an early-stage AI startup based in Miami, Florida, redefining the way organizations leverage data and automation for real-world operational impact. We are building the foundational data backbone and cognitive architecture that powers truly autonomous software systems. Our mission is to deliver platforms where artificial intelligence actively learns, adapts, and optimizes business outcomes, moving far beyond reporting dashboards into the realm of self-improving intelligence.


About the Role

As we scale rapidly from Seed to Series A, your work will directly shape the “nervous system” that enables our products to learn from mistakes, deliver real-time insights, and support exponential growth.


Responsibilities

  • Design and deliver event-driven, high-concurrency services using Node.js and JavaScript or TypeScript. Build real-time APIs and streaming platforms with frameworks such as Express, Fastify, or NestJS.
  • Engineer robust data transformation, orchestration, and advanced ML pipelines in Python. Leverage technologies including asyncio, Pydantic, and FastAPI for scalable, asynchronous applications.
  • Architect resilient, production-grade solutions using GCP. Experience with AWS or Azure equivalents is valued too.
  • Build streaming infrastructure with Google Cloud Pub/Sub or Kafka, incorporate dead-letter queues, enable message replay, and support schema evolution via Protobuf or Avro.
  • Optimize BigQuery for high-performance analytics: incorporate partitioning, clustering, materialized views, and proactive cost management.
  • Model and synchronize real-time operational data in Firestore using denormalization techniques; support caching and queueing with Redis, relational workloads via PostgreSQL, and utilize ClickHouse or Elastic for time-series and search needs.
  • Orchestrate modern processing workflows using Cloud Functions, Cloud Run (serverless transforms), and Dataflow or Apache Beam for both streaming and batch processing. Automate DAGs with Cloud Composer or Dagster.
  • Manage and scale all services on Docker and Kubernetes (GKE). Provision and update infrastructure with Terraform, and deploy rapidly with GitHub Actions, using progressive delivery and canary releases to maintain stability.
  • Lead observability initiatives with OpenTelemetry, Prometheus, Grafana, Cloud Logging and Monitoring, and structured tracing for real-time operational excellence.
  • Enforce security, compliance, and audit standards using OAuth2 or OIDC, scoped tokens, robust secrets management, audit trails, and least-privilege IAM.
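To make the dead-letter-queue responsibility above concrete, here is a minimal in-process sketch of the pattern in Python. It is not the Google Cloud Pub/Sub API (where dead-lettering is configured on the subscription); `consume`, `MAX_ATTEMPTS`, and the message dict shape are illustrative assumptions.

```python
import queue

MAX_ATTEMPTS = 3  # delivery attempts before a message is routed to the DLQ


def consume(main_q: queue.Queue, dlq: queue.Queue, handler) -> None:
    """Drain main_q, retrying each message up to MAX_ATTEMPTS times.

    Messages that keep failing are parked on the dead-letter queue with
    their last error attached, so they can be inspected and replayed
    later instead of being silently dropped.
    """
    while not main_q.empty():
        msg = main_q.get()
        attempts = msg.get("attempts", 0)
        try:
            handler(msg["body"])
        except Exception as exc:
            if attempts + 1 >= MAX_ATTEMPTS:
                # exhausted retries: park on the DLQ for later replay
                dlq.put({**msg, "attempts": attempts + 1, "error": str(exc)})
            else:
                # re-enqueue with an incremented attempt counter
                main_q.put({**msg, "attempts": attempts + 1})
```

The same idea carries over to managed brokers: track delivery attempts, cap them, and route exhausted messages to a side channel that preserves enough context (payload plus error) to support replay once the downstream bug is fixed.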


Qualifications

  • Experienced in backend and data engineering with direct production exposure to high-concurrency, event-driven Node.js and JS/TS systems.
  • Advanced Python engineer comfortable building asynchronous data flows, orchestrating ML pipelines, and integrating with the latest frameworks.
  • Strong track record working with GCP (or equivalent cloud platforms), mastering modern streaming and analytical data stacks.
  • Deep familiarity with distributed, real-time messaging systems, schema evolution, and automated, resilient system deployment.
  • Infrastructure-centric: proven ability to build, automate, and monitor microservices, containers, and orchestration frameworks supporting startup velocity.
  • Comfortable shipping at startup speed, tackling complex problems, and delivering reliable, scalable solutions for rapid user and data growth.

Preferred Skills

  • Built multi-tenant learning loops or adaptive AI pipelines in production.
  • Delivered advanced monitoring, alerting, and automated recovery for large distributed systems.
  • Implemented security, compliance, and operational controls in regulated environments.

What We Offer

  • Help architect the core infrastructure of a transformative AI startup at an inflection point.
  • Collaborate with a team that values radical ownership, open communication, and relentless curiosity.
  • Competitive salary, early-stage equity, and the opportunity to innovate at the front lines of autonomous data systems.
  • Direct impact: your work will power business-critical AI products and fuel our climb from early traction to market leadership.


Location & Culture

We value direct, clear feedback and systems thinking over corporate fluff and politics. The role is hybrid and on-site in Miami, with no remote option; a relocation package is provided.

