
Databricks Unified Data Analytics Platform Engineer

Taguig | Job No. r00302696 | Full-time

Job Description

Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.

Qualifications

  • A minimum of one year of relevant experience is required.
  • Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies including Delta Lake, Auto Loader, and DLT.
  • Excellent programming and debugging skills in Python.
  • Strong hands-on experience with PySpark to build efficient data transformation and validation logic.
  • Must be proficient in at least one cloud platform: AWS, GCP, or Azure.
  • Create modular dbx functions for transformation, PII masking, and validation logic, reusable across DLT and notebook pipelines (a sketch of such helpers follows this list).
  • Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data (see the Auto Loader sketch after this list).
  • Build secure and observable DLT pipelines with DLT Expectations, supporting Bronze/Silver/Gold medallion layering (a DLT sketch follows this list).
  • Configure Unity Catalog: set up catalogs, schemas, and user/group access; enable audit logging; and define masking for PII fields (see the Unity Catalog sketch after this list).
  • Enable secure data access across domains and workspaces via Unity Catalog External Locations, Volumes, and lineage tracking.
  • Access and utilize data assets from the Databricks Marketplace to support enrichment, model training, or benchmarking.
  • Collaborate with data sharing stakeholders to implement Delta Sharing, both internally and externally (a provider-side sketch follows this list).
  • Integrate Power BI/Tableau/Looker with Databricks using optimized connectors (ODBC/JDBC) and Unity Catalog security controls.
  • Build stakeholder-facing SQL Dashboards within Databricks to monitor KPIs, data pipeline health, and operational SLAs.
  • Prepare GenAI-compatible datasets: manage vector embeddings, index with Databricks Vector Search, and use Feature Store with MLflow.
  • Package and deploy pipelines using Databricks Asset Bundles through CI/CD pipelines in GitHub or GitLab.
  • Troubleshoot, tune, and optimize jobs using Photon engine and serverless compute, ensuring cost efficiency and SLA reliability.
  • Experience with cloud-based services relevant to data engineering, data storage, data processing, data warehousing, real-time streaming, and serverless computing.
  • Hands-on experience applying performance optimization techniques.
  • An understanding of data modeling and data warehousing principles is essential.
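
The sketches below are minimal, hedged illustrations of several items above, not prescribed implementations. First, modular PySpark helpers for PII masking and null validation of the kind the "modular functions" item describes; the function, column, and parameter names are hypothetical:

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def mask_pii(df: DataFrame, columns: list[str]) -> DataFrame:
    """Replace PII columns with a SHA-256 hash; hashed values remain joinable."""
    for c in columns:
        df = df.withColumn(c, F.sha2(F.col(c).cast("string"), 256))
    return df


def validate_not_null(df: DataFrame, columns: list[str]) -> DataFrame:
    """Drop rows where any required column is null."""
    for c in columns:
        df = df.filter(F.col(c).isNotNull())
    return df
```

Because each helper takes and returns a DataFrame, the same code can be called from a DLT table definition or an interactive notebook, e.g. mask_pii(validate_not_null(df, ["customer_id"]), ["email", "phone"]).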
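
Next, an Auto Loader ingestion pattern with checkpointing and schema evolution, assuming a JSON source and the ambient Databricks spark session; the paths and table name are hypothetical:

```python
# Incrementally discover new files; let the schema evolve as columns appear.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/main/raw/_schemas/orders")
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load("/Volumes/main/raw/orders/")
)

(
    df.writeStream
    # The checkpoint makes the stream restartable without reprocessing files.
    .option("checkpointLocation", "/Volumes/main/raw/_checkpoints/orders")
    .trigger(availableNow=True)  # drain currently available files, then stop
    .toTable("main.bronze.orders")
)
```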
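
A DLT sketch showing Expectations guarding a Silver table fed from a Bronze ingest; the source path, table names, and rule names are hypothetical:

```python
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders landed by Auto Loader (Bronze).")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/raw/orders/")
    )


@dlt.table(comment="Validated orders (Silver).")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop bad rows
@dlt.expect("positive_amount", "amount > 0")  # log violations, keep the rows
def silver_orders():
    return dlt.read_stream("bronze_orders").withColumn(
        "ingested_at", F.current_timestamp()
    )
```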
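
A Unity Catalog setup sketch, expressed as SQL issued from a notebook; the catalog, group, and table names are hypothetical, and audit logging itself is enabled at the account level rather than through SQL:

```python
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.silver")

# Read-only access for an account group.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-readers`")
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA analytics.silver TO `data-readers`")

# Column mask: only members of the pii-readers group see the raw value.
spark.sql("""
    CREATE OR REPLACE FUNCTION analytics.silver.email_mask(email STRING)
    RETURN CASE
        WHEN is_account_group_member('pii-readers') THEN email
        ELSE '***REDACTED***'
    END
""")
spark.sql("""
    ALTER TABLE analytics.silver.customers
    ALTER COLUMN email SET MASK analytics.silver.email_mask
""")
```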
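
Finally, a provider-side Delta Sharing sketch; the share, recipient, and sharing identifier are hypothetical placeholders:

```python
spark.sql("CREATE SHARE IF NOT EXISTS quarterly_metrics")
spark.sql("ALTER SHARE quarterly_metrics ADD TABLE analytics.silver.orders")

# Databricks-to-Databricks sharing: identify the recipient by the sharing
# identifier of their Unity Catalog metastore.
spark.sql("""
    CREATE RECIPIENT IF NOT EXISTS partner_co
    USING ID 'aws:us-west-2:<metastore-uuid>'
""")
spark.sql("GRANT SELECT ON SHARE quarterly_metrics TO RECIPIENT partner_co")
```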

Good to Have:

  • Certifications: Databricks Certified Professional or similar certifications.
  • Machine Learning: Knowledge of machine learning concepts and experience with popular ML libraries.
  • Knowledge of big data processing (e.g., Spark, Hadoop, Hive, Kafka).
  • Data Orchestration: Apache Airflow.
  • Knowledge of CI/CD pipelines and DevOps practices in a cloud environment.
  • Experience with ETL tools like Informatica, Talend, Matillion, or Fivetran.
  • Familiarity with dbt (Data Build Tool).

 

