Senior Data Architect

nu-pie analytics • India
Remote

Job Description


Job Title: Data Architect

Experience: 13 to 15 Years

Contract Duration: 6 Months

Work Mode: Remote

Work Timing: US business hours (overlap window ends at 10:30 PM IST)

Location: Remote / Open

Role Overview

We are seeking a highly experienced Data Architect with 7–10 years of proven, hands-on architecture ownership who can contribute from day one. This role demands an execution-focused architect who is equally comfortable designing scalable data platforms and driving implementation independently in complex, ambiguous environments. The ideal candidate will bring strong leadership, deep technical expertise, and a bias for action.


Mandatory Skills — Technical

•      ETL/ELT and SQL coding using cloud-based database solutions such as Azure SQL, Synapse, Redshift, Snowflake, or similar tools.

•      Design and develop star schema data models and ETL/ELT jobs to support use cases across typical business domains.

•      Apply best-practice techniques across data modelling, table-driven control of transformation jobs, and dynamic ETL/ELT jobs that scale to expanding use cases (a sketch of the table-driven pattern follows this list).

•      Establish and maintain reporting dashboard solutions for enterprise-wide and executive analytics, including maintenance of back-end transformation layers and data pipelines.

•      Proficiency in programming languages such as Python, Java, or similar.
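
For illustration only, a minimal Python sketch of the table-driven pattern named in the list above: a control table of job metadata drives a single generic transformation routine, so new jobs are added as rows rather than as new code. All table, column, and job names here are hypothetical, not part of any client stack.

import sqlite3

# Hypothetical control table: one row of metadata per transformation job.
# In a real platform this would live in the warehouse itself.
CONTROL_TABLE = [
    {"job": "load_dim_customer", "source": "stg_customer", "target": "dim_customer", "enabled": True},
    {"job": "load_fact_sales", "source": "stg_sales", "target": "fact_sales", "enabled": True},
]

def run_job(conn, source, target):
    # One generic, dynamic step serves every job; behaviour comes
    # entirely from the control-table row, not from job-specific code.
    conn.execute(f"INSERT INTO {target} SELECT * FROM {source}")

def run_all(conn):
    for row in CONTROL_TABLE:
        if row["enabled"]:
            run_job(conn, row["source"], row["target"])

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    # Stand-in staging and target tables so the sketch runs end to end.
    for t in ("stg_customer", "dim_customer", "stg_sales", "fact_sales"):
        conn.execute(f"CREATE TABLE {t} (id INTEGER, val TEXT)")
    conn.execute("INSERT INTO stg_customer VALUES (1, 'acme')")
    run_all(conn)
    print(conn.execute("SELECT * FROM dim_customer").fetchall())  # [(1, 'acme')]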


Key Responsibilities

•      Own end-to-end data architecture design from conceptualization to deployment.

•      Lead and execute the design of scalable, performant data platforms on cloud infrastructure.

•      Develop and maintain enterprise data models, including star and snowflake schemas (a minimal star-schema sketch follows this list).

•      Build and optimize ETL/ELT pipelines for high-volume, complex data environments.

•      Collaborate with business stakeholders to translate requirements into robust data solutions.

•      Drive best practices in data modelling, pipeline orchestration, and data quality assurance.

•      Establish governance standards for data assets, metadata, and documentation.

•      Support and mentor junior data engineers and analysts as needed.

•      Provide executive-level reporting and analytical dashboards for business insights.

•      Proactively identify and resolve data architecture bottlenecks and inefficiencies.
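
By way of example, here is a small star schema (a hypothetical retail model, not the client's) built through Python's standard sqlite3 module: one additive fact table keyed by surrogate keys to two dimension tables, followed by the aggregate-and-slice query this shape exists to serve.

import sqlite3

DDL = """
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,  -- surrogate key
    customer_id  TEXT,                 -- natural/business key
    segment      TEXT
);
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    quantity     INTEGER,
    amount       REAL                  -- additive measures only
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
# Typical star-schema query: aggregate the fact, slice by dimensions.
print(conn.execute(
    "SELECT d.year, c.segment, SUM(f.amount) "
    "FROM fact_sales f "
    "JOIN dim_date d ON f.date_key = d.date_key "
    "JOIN dim_customer c ON f.customer_key = c.customer_key "
    "GROUP BY d.year, c.segment"
).fetchall())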


Required Qualifications

•      13–15 years of overall experience in data engineering, data warehousing, or data architecture.

•      7–10 years of proven hands-on data architecture ownership in enterprise environments.

•      Hands-on expertise with cloud data platforms: Azure Synapse, Azure SQL, Snowflake, Amazon Redshift, or equivalent.

•      Strong SQL coding skills for complex transformations, performance tuning, and schema design.

•      Demonstrated experience with ETL/ELT tools and frameworks.

•      Proficiency in Python and/or Java for data pipeline development.

•      Experience with enterprise reporting and BI tools (e.g., Power BI, Tableau, or similar).

•      Deep understanding of dimensional modelling, data vault, and relational design patterns.

•      Ability to work independently in ambiguous, fast-paced environments with minimal supervision.


Preferred Qualifications

•      Experience with data orchestration tools such as Apache Airflow, Azure Data Factory, or AWS Glue (an example DAG sketch follows this list).

•      Exposure to real-time/streaming data architectures (Kafka, Spark Streaming, etc.).

•      Familiarity with DevOps / DataOps practices including CI/CD pipelines for data.

•      Experience working with cross-functional, globally distributed teams.

•      Cloud certifications (Azure, AWS, GCP) are a plus.
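
As a concrete sketch of the orchestration experience listed above, a minimal Airflow 2.x DAG chaining extract, transform, and load on a daily schedule; the DAG id and task callables are hypothetical placeholders.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")

def transform():
    print("apply transformations")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # one run per day
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load  # dependency chain sets run order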


Engagement Details

Contract Type: Contract (6 months)

Work Hours: US Business Hours (workday ends at 10:30 PM IST to allow overlap for calls)

Work Mode: Fully Remote

Start: Immediate

Note: Candidates must be available to attend calls/meetings during US business hours. The work schedule is structured to accommodate a meaningful overlap window ending at 10:30 PM IST.

