Senior Data Engineer (Remote, Romania)

smartx net apps • Romania
Remote

Job Description


We are looking for a Senior Data Engineer to take ownership of our data infrastructure, with a strong focus on maintenance, reliability, and incremental improvements across existing and new data projects.


The role is fully remote within Romania. Fluency in Romanian and English (C1) is required.


This is a full-time role, with ongoing responsibility for pipeline maintenance and close collaboration with the core product and data team. You will inherit a mature data stack and be responsible for keeping it secure, scalable, and well-governed, while also contributing to new initiatives as they arise.


Key Responsibilities

Data Platform Ownership & Maintenance

  • Own the day-to-day reliability and maintenance of data pipelines and the analytics stack.
  • Ensure data freshness, correctness, and availability for downstream consumers.
  • Proactively identify and resolve pipeline failures, performance issues, and data quality risks.
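As an illustration of the freshness checks described above, here is a minimal sketch assuming a table's latest load timestamp has already been fetched from the warehouse (the threshold and timestamps are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """Return True when a table's most recent load is older than the allowed lag."""
    return datetime.now(timezone.utc) - last_loaded_at > max_lag

# Hypothetical example: flag a table as stale if it has not loaded in 6 hours.
last_load = datetime.now(timezone.utc) - timedelta(hours=8)
print(is_stale(last_load, timedelta(hours=6)))  # True: 8h exceeds the 6h threshold
```

In practice a check like this would run on a schedule and page the on-call engineer; the sketch only shows the comparison itself.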


Snowflake & Data Warehouse Management

  • Own ingestion workflows (e.g. AWS S3 to Snowflake), including copy commands, transformations, and monitoring.
  • Support cost optimization and performance tuning within the warehouse.

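The S3-to-Snowflake ingestion path mentioned above typically centers on a `COPY INTO` command. A minimal sketch of assembling such a statement in Python (the stage, table, and file-format names are hypothetical, not taken from this posting):

```python
def build_copy_statement(table: str, stage: str, file_format: str, pattern: str) -> str:
    """Assemble a Snowflake COPY INTO statement for files staged in S3."""
    return (
        f"COPY INTO {table}\n"
        f"FROM @{stage}\n"
        f"FILE_FORMAT = (FORMAT_NAME = '{file_format}')\n"
        f"PATTERN = '{pattern}'\n"
        f"ON_ERROR = 'ABORT_STATEMENT';"
    )

sql = build_copy_statement("raw.events", "s3_events_stage", "json_format", ".*[.]json")
print(sql.splitlines()[0])  # COPY INTO raw.events
```

Generating the statement in code makes it easy to log, review, and monitor each load, which is what the monitoring responsibility above implies.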

Transformation & Modeling Layer

  • Maintain and extend dbt models, including:
      • Modeling standards and best practices
      • Testing and validation logic
      • Deployment workflows
  • Own the semantic layer (Cube):
      • Maintain metric definitions used by APIs and downstream tools
      • Manage version control and deployments via GitHub
      • Ensure consistency between warehouse models and exposed metrics
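The testing and validation logic for dbt models is usually declared alongside the models in YAML. A hedged example of what such tests look like, with a hypothetical model and column names:

```yaml
# models/marts/schema.yml (model and column names are hypothetical)
version: 2

models:
  - name: fct_orders
    description: "One row per completed order."
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - not_null
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```

Checks like these run on every dbt build, which is how modeling standards and validation logic stay enforced in deployment workflows.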

External Data & Tooling

  • Own orchestration and configuration of external data tools (e.g. Fivetran, Hex):
      • Connector setup and monitoring
      • Sync schedules and dependencies
      • Impact analysis of schema or source changes
  • Act as technical owner for tool integrations, including API keys, service accounts, and vendor interactions.


Analytics & Product Data Support

  • Support and maintain Metabase:
      • Ownership of key collections and saved queries
      • Data model alignment between Snowflake and end-user dashboards
  • Collaborate on product analytics:
      • Review and maintain Amplitude instrumentation plans
      • Ensure event definitions are consistent and correctly synced into the warehouse
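A sketch of the event-consistency check described above: comparing an instrumentation plan against the event names actually synced into the warehouse (both input sets are hypothetical):

```python
def diff_event_definitions(planned: set[str], warehouse: set[str]) -> dict[str, set[str]]:
    """Compare planned Amplitude event names against those synced into the warehouse."""
    return {
        "missing_from_warehouse": planned - warehouse,   # planned but never synced
        "unplanned_in_warehouse": warehouse - planned,   # synced but absent from the plan
    }

plan = {"signup_completed", "order_placed", "order_refunded"}
synced = {"signup_completed", "order_placed", "checkout_viewed"}
print(diff_event_definitions(plan, synced))
```

Either bucket being non-empty signals drift between the tracking plan and the warehouse, which is exactly the inconsistency this responsibility guards against.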


AI-Augmented Engineering

  • Use LLMs and AI coding assistants as standard tools in your daily development workflow.
  • Design, build, and maintain agentic workflows that automate repetitive data engineering tasks such as pipeline monitoring, anomaly detection, schema change analysis, and documentation generation.
  • Evaluate and integrate new AI-powered tools into the data stack where they improve velocity, quality, or reliability.
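One of the automations mentioned above, anomaly detection on pipeline volumes, can start as simply as a z-score over recent row counts. A minimal sketch (the counts and threshold are illustrative):

```python
import statistics

def is_volume_anomaly(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's row count if it deviates from the recent mean by more than z_threshold sigmas."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

daily_counts = [1000, 1020, 980, 1010, 995, 1005, 990]
print(is_volume_anomaly(daily_counts, 100))   # True: volume collapsed
print(is_volume_anomaly(daily_counts, 1008))  # False: within normal range
```

An agentic workflow would wrap a check like this with a scheduler and an LLM-generated triage summary; the statistical core stays this small.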


Collaboration & Documentation

  • Work closely with product, analytics, and engineering teams as a fully integrated team member.
  • Maintain clear documentation for pipelines, models, permissions, and operational processes.
  • Provide guidance and best practices to analysts and stakeholders consuming data.


Required Experience & Skills

  • 5+ years of experience as a Data Engineer, with senior-level ownership of production data systems
  • Strong hands-on experience with:
      • Snowflake (performance, security, cost awareness)
      • dbt (modeling, testing, deployments)
      • Cloud data ingestion patterns (especially AWS S3 → warehouse)
  • Experience with modern analytics stacks and tools such as:
      • Fivetran or similar ELT tools
      • Semantic layers (e.g. Cube or equivalent)
      • BI tools (Metabase, Looker, etc.)
  • Demonstrated experience using LLMs as part of a daily engineering workflow: not just casual use, but integrated into how you build, test, and ship.
  • Solid understanding of data modeling, metrics governance, and analytics enablement
  • Comfortable working in Git-based workflows and production environments
  • Strong ownership mindset and ability to work independently in a fully remote setup


Nice to Have

  • Experience with product analytics platforms (e.g. Amplitude)
  • Familiarity with cost optimization and warehouse scaling strategies


Suitable candidates will be contacted within 10 days. We run a fast and lean recruitment process.

