
Data Platform Engineer

Bengaluru | Job No. atci-5210856-s1911346 | Full-time

Job Description

Project Role : Data Platform Engineer
Project Role Description : Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : NA
Minimum 15 years of experience is required
Educational Qualification : 15 years of full-time education

Summary:
As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.


Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and discussions to gather requirements and feedback from stakeholders.
- Continuously evaluate and improve data architecture practices to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Design and build complex pipelines using Delta Lake, Auto Loader, and Delta Live Tables (DLT), and deploy them using Asset Bundles; see the DLT sketch after this list.
- Proven experience as a Data Architect and Data Engineer leading enterprise-scale Lakehouse initiatives.
- Expert-level understanding of modern Data & Analytics architecture patterns, including Data Mesh, Data Products, and Lakehouse Architecture.
- Excellent programming and debugging skills in Python.
- Strong experience with PySpark for building scalable and modular ETL/ELT pipelines.
- Architect data ingestion and transformation using DLT Expectations, modular Databricks Functions, and reusable pipeline components.
- Hands-on expertise in at least one major cloud platform: AWS, GCP, or Azure.
- Lead implementation of Unity Catalog: create catalogs, schemas, role-based access policies, lineage visibility, and data classification tagging (PII, PHI, etc.); see the Unity Catalog sketch after this list.
- Guide organization-wide governance via Unity Catalog setup: workspace linkage, SSO, audit logging, external locations, and Volume access.
- Enable cross-platform data access using Lakehouse Federation, querying live from externally hosted databases.
- Leverage and integrate Databricks Marketplace to consume high-quality third-party data and publish internal data assets securely.
- Experience with cloud-based services relevant to data engineering: data storage, data processing, data warehousing, real-time streaming, and serverless computing.
- Govern and manage Delta Sharing to securely share datasets with external partners or across tenants; see the Delta Sharing sketch after this list.
- Design and maintain PII anonymization, tokenization, and masking strategies using dbx functions and Unity Catalog policies to meet GDPR/HIPAA compliance.
- Architect Power BI, Tableau, and Looker integration with Databricks for live reporting and visualization over governed datasets.
- Build Databricks SQL dashboards that give stakeholders real-time insights, KPI tracking, and alerts.
- Hands-on experience applying performance optimization techniques.
- Lead cross-functional initiatives across data science, analytics, and platform teams to deliver secure, scalable, and value-aligned data products.
- Provide thought leadership on adopting advanced features such as Mosaic AI, Vector Search, Model Serving, and Databricks Marketplace publishing.
- Working knowledge of DBT (Data Build Tool) is a plus.
- A strong background in data modeling and data warehousing concepts is required.
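
For illustration only, a minimal Delta Live Tables sketch of the kind of pipeline described above, combining Auto Loader ingestion with a DLT expectation; the table names and landing path are hypothetical, not taken from this role:

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested incrementally with Auto Loader.")
    def orders_raw():
        # 'spark' is provided by the DLT runtime; the landing path is hypothetical.
        return (
            spark.readStream.format("cloudFiles")  # Auto Loader source
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/orders")
        )

    @dlt.table(comment="Validated orders; rows failing the expectation are dropped.")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
    def orders_clean():
        return (
            dlt.read_stream("orders_raw")
            .withColumn("ingested_at", F.current_timestamp())
        )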
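
Likewise, a hedged sketch of the Unity Catalog work mentioned above, run from a notebook: catalog and schema creation, a group grant, and a column mask for PII. All object and group names (governed, finance, data-analysts, pii-readers) are assumptions for illustration:

    # Each statement is standard Unity Catalog SQL executed via spark.sql.
    for stmt in [
        "CREATE CATALOG IF NOT EXISTS governed",
        "CREATE SCHEMA IF NOT EXISTS governed.finance",
        "GRANT USE CATALOG ON CATALOG governed TO `data-analysts`",
        # Mask email values for anyone outside a hypothetical pii-readers group.
        """CREATE OR REPLACE FUNCTION governed.finance.mask_email(email STRING)
           RETURNS STRING
           RETURN CASE WHEN is_account_group_member('pii-readers')
                       THEN email ELSE '***' END""",
        """ALTER TABLE governed.finance.customers
           ALTER COLUMN email SET MASK governed.finance.mask_email""",
    ]:
        spark.sql(stmt)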
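
And a brief Delta Sharing sketch, again with illustrative names (partner_share, acme_partner), showing how a governed table might be shared with an external recipient:

    # Create a share, add a governed table to it, and grant it to a recipient.
    spark.sql("CREATE SHARE IF NOT EXISTS partner_share")
    spark.sql("ALTER SHARE partner_share ADD TABLE governed.finance.orders_clean")
    spark.sql("CREATE RECIPIENT IF NOT EXISTS acme_partner")
    spark.sql("GRANT SELECT ON SHARE partner_share TO RECIPIENT acme_partner")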

Additional Information:
- The candidate should have a minimum of 15 years of experience with the Databricks Unified Data Analytics Platform.
- Certifications: Databricks Certified Professional or similar certifications.
- Machine Learning: knowledge of machine learning concepts and experience with popular ML libraries.
- Knowledge of big data processing (e.g., Spark, Hadoop, Hive, Kafka).
- Data Orchestration: Apache Airflow; see the sketch after this list.
- Knowledge of CI/CD pipelines and DevOps practices in a cloud environment.
- Experience with ETL tools such as Informatica, Talend, Matillion, or Fivetran.
- Familiarity with DBT (Data Build Tool).
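
For the orchestration item above, a minimal Airflow sketch using the Databricks provider; the DAG id, connection id, and job id are hypothetical:

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import (
        DatabricksRunNowOperator,
    )

    with DAG(
        dag_id="daily_lakehouse_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ schedule syntax
        catchup=False,
    ) as dag:
        # Trigger an existing Databricks job by id over an assumed connection.
        DatabricksRunNowOperator(
            task_id="run_databricks_job",
            databricks_conn_id="databricks_default",
            job_id=12345,
        )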

Education Qualification:
- 15 years of full-time education is required.

Job Requirements

15 years of full-time education
