Azure Data Platform Operations Engineer

Ascendum Solutions • United States
Remote
Job Description


  • Candidates should be eligible to work for any employer in the United States without needing visa sponsorship.
  • Fully remote; must be willing to work in the PST time zone.
  • CANNOT consider C2C candidates.


Job Title: Azure Data Platform Operations Engineer

Duration: 11+ months contract



We are seeking an Azure Data Platform Operations Engineer with a strong focus on automation and infrastructure as code to join our dynamic team. The successful candidate will be responsible for automating the operations, design, implementation, and maintenance of our Azure-based data platforms, including Databricks and other related Azure services. This role requires expertise in creating reproducible, scalable, and secure environments using infrastructure as code methodologies, particularly within the Azure cloud ecosystem.


Responsibilities:

Platform Automation and Infrastructure as Code:

  • Design and implement infrastructure as code (IaC) using tools like Terraform to manage and provision Azure resources systematically and efficiently.
  • Automate the setup, scaling, and management of Databricks workspaces and other Azure data services using CI/CD pipelines.
  • Develop scripts and tools to automate operational tasks, reduce manual intervention, and ensure consistent environments through code (see the sketch after this list).
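
For illustration only, a minimal sketch of what such a script could look like: a Python wrapper that drives Terraform from a CI/CD job so every environment is provisioned through the same init/plan/apply sequence. The stack directory and tfvars paths are placeholders, and the Terraform CLI is assumed to be installed on the build agent.

```python
"""Sketch: run a Terraform init/plan/apply cycle from a pipeline step.
Paths and file names below are illustrative placeholders."""
import subprocess
from pathlib import Path

def run(cmd: list[str], cwd: Path) -> None:
    # Fail the pipeline immediately if any Terraform step returns non-zero.
    subprocess.run(cmd, cwd=cwd, check=True)

def provision(stack_dir: str, var_file: str) -> None:
    cwd = Path(stack_dir)
    run(["terraform", "init", "-input=false"], cwd)             # fetch providers/modules
    run(["terraform", "plan", "-input=false",                   # render a reviewable plan
         f"-var-file={var_file}", "-out=tfplan"], cwd)
    run(["terraform", "apply", "-input=false", "tfplan"], cwd)  # apply exactly that plan

if __name__ == "__main__":
    # e.g. one call per environment, each with its own tfvars file
    provision("infra/databricks-workspace", "environments/dev.tfvars")
```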


Platform Administration:

  • Administer, configure, and optimize the Databricks platform and other Azure data services to support data analytics, machine learning, and data engineering activities.
  • Manage Azure resources including Data Factories, Key Vaults, Storage Accounts, ML Workspaces, and Purview through automated scripts and IaC.
  • Monitor, troubleshoot, and optimize resource utilization and platform costs using automated tools and techniques (a sketch follows this list).
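
As one example of the automated tooling this implies, a minimal sketch that lists the resources in a resource group and flags any that are missing cost-allocation tags. It assumes the azure-identity and azure-mgmt-resource packages and an AZURE_SUBSCRIPTION_ID environment variable; the required tag names and the resource group are illustrative.

```python
"""Sketch: audit Azure resources for missing cost-allocation tags.
Tag names and the resource group below are illustrative assumptions."""
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # hypothetical tagging policy

def untagged_resources(resource_group: str) -> list[str]:
    client = ResourceManagementClient(
        DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
    )
    findings = []
    for res in client.resources.list_by_resource_group(resource_group):
        missing = REQUIRED_TAGS - set((res.tags or {}).keys())
        if missing:
            findings.append(f"{res.type}/{res.name}: missing tags {sorted(missing)}")
    return findings

if __name__ == "__main__":
    for finding in untagged_resources("rg-dataplatform-dev"):
        print(finding)
```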


Operational Support:

  • Ensure platform security, compliance, and reliability through automated patching, updates, and security configuration.
  • Implement and manage access controls, security policies, and monitoring solutions to protect sensitive data across all Azure data platforms (see the sketch below).
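
A minimal sketch of one such monitoring check: flag Key Vault secrets that are expired or close to expiry. It assumes the azure-identity and azure-keyvault-secrets packages; the vault URL is a placeholder.

```python
"""Sketch: report Key Vault secrets expiring within a given window.
The vault URL is a placeholder; auth uses DefaultAzureCredential."""
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def expiring_secrets(vault_url: str, within_days: int = 30) -> list[str]:
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
    cutoff = datetime.now(timezone.utc) + timedelta(days=within_days)
    findings = []
    for props in client.list_properties_of_secrets():
        # Only flag enabled secrets that carry an expiry date inside the window.
        if props.enabled and props.expires_on and props.expires_on <= cutoff:
            findings.append(f"{props.name} expires {props.expires_on:%Y-%m-%d}")
    return findings

if __name__ == "__main__":
    for finding in expiring_secrets("https://kv-dataplatform-dev.vault.azure.net"):
        print("WARN:", finding)
```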


Collaboration and Technical Support:

  • Work closely with data engineering, data science/ML, and application/integration teams to automate data workflows and optimize data delivery.
  • Provide expert guidance on Azure best practices, especially regarding automation, scalability, and operational efficiency.
  • Collaborate with DevOps and other technology teams to enhance infrastructure automation and continuous delivery efforts.


Data Management:

  • Automate the management of schemas and data in Unity Catalog, including creation, configuration, cataloging, external storage, and access permissions (a sketch follows this list).
  • Develop and maintain automated solutions for data cataloging, metadata management, and governance to support data discovery and lineage tracking.
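
A minimal sketch of this kind of automation with the Databricks SDK for Python: create a schema and grant a group read access on it. Workspace credentials are assumed to come from the environment, the catalog, schema, and group names are illustrative, and the exact SDK surface should be checked against the installed version.

```python
"""Sketch: provision a Unity Catalog schema and grant read access to a group.
Catalog, schema, and group names are illustrative placeholders."""
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import PermissionsChange, Privilege, SecurableType

def provision_schema(catalog: str, schema: str, reader_group: str) -> None:
    w = WorkspaceClient()  # reads workspace URL and token from the environment
    w.schemas.create(name=schema, catalog_name=catalog,
                     comment="Provisioned by platform automation")
    # Grant the group the minimum needed to discover and query the schema.
    w.grants.update(
        securable_type=SecurableType.SCHEMA,
        full_name=f"{catalog}.{schema}",
        changes=[PermissionsChange(principal=reader_group,
                                   add=[Privilege.USE_SCHEMA, Privilege.SELECT])],
    )

if __name__ == "__main__":
    provision_schema("analytics", "sales_bronze", "data-analysts")
```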


Skills and Qualifications:

Essential:

  • 3+ years of hands-on experience with the Databricks platform and Azure PaaS.
  • 3+ years of experience in infrastructure as code, preferably with Terraform, in a production environment.
  • Strong knowledge of Azure cloud services and management of scalable cloud infrastructure.
  • Proficient in scripting languages such as PowerShell, Python, or Bash for automation purposes.
  • Solid understanding of security best practices, compliance frameworks, and risk management in the cloud.
  • Proficient in software development with Python.


Preferred:

  • Experience with CI/CD tools (e.g., Jenkins, Azure DevOps) for automating data and infrastructure workflows.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) within Azure.
  • Knowledge of real-time data processing technologies (e.g., Spark Streaming, Delta Live Tables, etc.).
