Lead a large-scale Oracle-to-Databricks migration, design and implement cloud-native data platforms, and develop reusable PySpark frameworks.
Job Description
Position Title: Senior Databricks Architect
Location: Detroit, MI – 5 days onsite
Duration: 24+ months
We are looking for a visa-independent candidate.
We are open to relocation as well.
Job Overview
We are looking for a highly senior, deeply hands-on Databricks Architect to lead a large-scale Oracle-to-Databricks migration covering schema migration, code conversion, and ODI job modernization. The ideal candidate has extensive experience building enterprise-grade data platforms on Databricks, has executed at least one greenfield Databricks implementation, and is exceptionally strong in PySpark, Spark SQL, framework development, and Databricks Workflows.
Key Responsibilities
- Architect, design, and implement cloud-native data platforms using Databricks (ingestion → transformation → consumption).
- Lead the full Oracle → Databricks migration including schema translation, ETL/ELT logic modernization, and ODI job replacement.
- Develop reusable PySpark frameworks, data processing patterns, and orchestration using Databricks Workflows.
- Build scalable, secure, and cost‑optimized Databricks infrastructure and data pipelines.
- Collaborate with business and technical stakeholders to drive data modernization strategy.
- Establish development best practices, coding standards, CI/CD, and DevOps/DataOps patterns.
- Provide technical mentorship and create training plans for engineering teams.
- Contribute to building MLOps and advanced operations frameworks.
Required Qualifications
- 14+ years in Data Engineering/Architecture, with 4+ years of hands-on Databricks experience delivering end-to-end cloud data solutions.
- Strong experience migrating from Oracle/on‑prem systems to Databricks, including SQL, PL/SQL, ETL logic, and ODI pipelines.
- Deep hands-on expertise in:
  - PySpark, Spark SQL, Delta Lake, Unity Catalog
  - Building reusable data frameworks
  - Designing high-performance batch and streaming pipelines
- Proven experience with greenfield Databricks implementations.
- Strong understanding of cloud-native architectures on AWS and modern data platform concepts.
- Solid knowledge of data warehousing, columnar databases, and performance optimization.
- Good understanding of Agile/Scrum development processes.
- Bonus: Experience designing Data Products, Data Mesh architectures, Data Vault or enterprise data governance models.
- Good understanding of Oracle GoldenGate.