Databricks Developer

Paves Technologies • San Angelo Area

Job Description


Company Description

Paves Technologies specializes in leveraging Artificial Intelligence, Generative AI, and Agentic AI to offer cutting-edge technology solutions and outsourcing services. We focus on enabling businesses to achieve scalable growth, improved efficiency, and a competitive advantage in an AI-driven world. Our expertise spans automation and augmentation to address the evolving needs of modern organizations. At Paves Technologies, we are committed to driving innovation and excellence for our clients.


Role Description

Job Title: Databricks Developer


Location: Atlanta, USA

Work Mode: Hybrid (4 days in office)


Joining: Priority will be given to candidates who can join within 15 days.

Note: H-1B sponsorship is available for qualified candidates applying for the Atlanta, USA location.


About the Role

Paves Technologies is seeking an experienced Databricks Developer with strong proficiency in PySpark, Azure, and enterprise-scale ETL pipelines. The ideal candidate brings deep foundational data engineering experience (5+ years) while being eager to adopt emerging Databricks capabilities such as Unity Catalog, LakeFlow, and Databricks Asset Bundles (DAB).

This position involves close collaboration between global teams in Atlanta and Hyderabad.

Key Responsibilities

  • Design, develop, and optimize scalable ETL pipelines and data workflows using Azure Databricks, Azure Data Factory, and other Azure services.
  • Build and maintain Spark jobs, Delta Lake pipelines, complex transformations, and Databricks notebooks using Python, SQL, and Spark SQL.
  • Manage Delta Lake tables, schema evolution, time travel, performance tuning, and advanced data optimization techniques (see the illustrative sketch after this list).
  • Implement CI/CD automation using Azure DevOps, including Databricks Asset Bundles (DAB) and environment-based deployments.
  • Implement robust data integrity, validation, and data quality checks to support zero-defect production deployments.
  • Work with emerging Databricks features, including Unity Catalog, LakeFlow, and Catalog Federation (experience with these newer capabilities is preferred but not required; a strong capacity to learn is key).
  • Handle complex data ingestion, schema inference, malformed records, error handling, parallelization, and join optimization.
  • Optimize ETL pipelines for scalability, cost efficiency, automation, and peak production performance.
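For context on the Delta Lake responsibilities above, here is a minimal, hedged PySpark sketch of the kind of work described; the table name, source path, and version number are hypothetical placeholders, not part of this role's actual codebase.

```python
# Minimal sketch of the Delta Lake tasks named above: schema evolution on
# write, time travel for audits, and file compaction for performance.
# All table names and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Schema evolution: append records while permitting additive column changes.
incoming = spark.read.parquet("/mnt/raw/orders/")   # hypothetical source
(incoming.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")       # allow new columns on append
    .saveAsTable("analytics.orders"))    # hypothetical Delta table

# Time travel: query an earlier committed version of the table.
previous = spark.sql("SELECT * FROM analytics.orders VERSION AS OF 3")

# Performance tuning: compact small files and co-locate rows on a
# frequently filtered column (Databricks OPTIMIZE ... ZORDER BY).
spark.sql("OPTIMIZE analytics.orders ZORDER BY (order_date)")
```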


Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in data engineering or similar roles.
  • Proven expertise with Azure Databricks, Spark, PySpark, and distributed data processing.
  • Strong understanding of ETL design, data warehousing, and cloud data engineering best practices.
  • Hands-on proficiency with Python, SQL, and Spark transformations.
  • Strong analytical, debugging, and performance optimization skills.
  • Excellent communication and collaboration skills across distributed teams.

Technical Skills

  • Azure Databricks
  • PySpark / Python
  • Apache Spark
  • SQL / Spark SQL
  • Delta Lake
  • ETL & Data Integration
  • Data Warehousing
  • Data Pipeline Architecture
  • Azure Data Factory
  • CI/CD using Azure DevOps & Databricks Asset Bundles (DAB)
  • Unity Catalog, LakeFlow (preferred but not mandatory)
  • Cloud Architecture & Optimization
  • Performance Tuning (see the join-tuning sketch after this list)
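As a concrete illustration of the Spark performance-tuning skill above, the following hedged sketch shows a broadcast join, a common technique for joining a large fact table to a small dimension table without shuffling the large side; the table and column names are hypothetical.

```python
# Illustrative broadcast-join sketch (a common Spark tuning pattern).
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

orders = spark.table("analytics.orders")         # large fact table
regions = spark.table("analytics.dim_regions")   # small dimension table

# Broadcasting the small side ships a full copy to every executor, so
# Spark can use a BroadcastHashJoin and avoid shuffling the fact table.
enriched = orders.join(broadcast(regions), "region_id")
enriched.explain()   # the physical plan should show BroadcastHashJoin
```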

Qualified candidates, please send your resumes to:

1)  hr@pavestechnologies.com

2)  contact@pavestechnologies.com

3)  varshinya.ukkusuri@pavestechnologies.com

