Digital Factory - Data Engineer - Assistant Director

EY

Software Engineering, Data Science

Luxembourg City, Luxembourg

Posted on May 12, 2026
At EY, we’re all in to shape your future with confidence.

We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go.

Join EY and help to build a better working world.

The opportunity

The Data Engineer provides strategic, technical, and operational leadership for the design, delivery, and operation of enterprise‑grade data platforms that underpin EY’s digital products, analytics, and data‑driven transformation initiatives.

This role goes beyond hands‑on delivery to owning data engineering outcomes end‑to‑end, shaping platform strategy, setting standards, and leading teams to deliver scalable, reliable, secure, and compliant data solutions in highly regulated environments.

You will act as a trusted technical leader and advisor to Product, Architecture, Analytics, Data Science, and senior stakeholders, ensuring data platforms are fit for purpose, future‑proof, and aligned with EY’s broader technology and risk strategy.

Key Responsibilities

Data Platform Strategy & Technical Leadership

  • Define and drive the data engineering strategy and roadmap aligned with product, analytics, and enterprise architecture objectives.
  • Own the technical vision and target architecture for data platforms (lakehouse, warehouse, streaming, and event‑driven patterns).
  • Make and govern architecture decisions, balancing scalability, cost, performance, security, and regulatory requirements.
  • Act as the design authority for complex data pipelines and platforms, reviewing and approving solution designs.

Advanced Data Platform & Pipeline Engineering

  • Oversee the design and implementation of enterprise‑scale batch and streaming data pipelines, using Python and modern data frameworks.
  • Ensure robust ingestion, transformation, and serving layers across internal and external data sources (APIs, databases, files, events).
  • Drive the development of reusable data engineering frameworks, accelerators, and reference implementations.
  • Establish and enforce data contracts, schemas, versioning strategies, and documentation standards across teams.

Data Quality, Reliability & Operational Excellence

  • Set and own data quality, reliability, and observability standards across all managed data products.
  • Define and govern SLAs/SLOs for critical data assets, ensuring business‑critical use cases are protected.
  • Lead major incident management and root‑cause analysis, ensuring durable, systemic fixes rather than tactical workarounds.
  • Embed testing strategies (unit, integration, data quality, regression) as non‑negotiable engineering standards.

Performance, Scalability & Cost Governance

  • Ensure platforms and pipelines are designed for scale, resilience, and fault tolerance (idempotency, retries, checkpointing, backpressure).
  • Drive continuous optimization of compute usage, storage layouts, query performance, and costs.
  • Make data‑driven trade‑off decisions between performance, cost, complexity, and maintainability.

People Leadership & Team Enablement

  • Provide technical leadership, mentoring, and coaching to Data Engineers across multiple teams or initiatives.
  • Set clear expectations for engineering quality, delivery discipline, and professional development.
  • Support capacity planning, skill development, and succession planning within the data engineering capability.
  • Act as an escalation point for complex technical and delivery challenges.

Cross‑Functional & Stakeholder Collaboration

  • Partner closely with Product, Architecture, Backend Engineering, Analytics, and Data Science leads to translate strategic objectives into executable data solutions.
  • Engage senior stakeholders to explain technical trade‑offs, risks, and investment needs in clear, business‑aligned language.
  • Ensure data platform decisions support downstream consumption patterns (APIs, analytics, ML, operational use cases).

Security, Privacy, Risk & Compliance Ownership

  • Own the implementation and governance of secure‑by‑design data engineering practices (access controls, encryption, secrets management).
  • Ensure platforms comply with enterprise, regulatory, and privacy obligations (data classification, lineage, retention, auditability).
  • Act as a key contributor to audits, risk assessments, and data governance forums, representing the data engineering domain.

Operations, Governance & Continuous Improvement

  • Take accountability for production data platforms, ensuring operational stability, availability, and ongoing improvement.
  • Reduce operational risk and toil through automation, standardization, and platform‑level capabilities.
  • Define, maintain, and evolve data engineering standards, guardrails, and best practices across the Digital Factory.
  • Contribute to broader technology governance and platform strategy at program or portfolio level.

Qualifications Required

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Software Engineering, or a related discipline.
  • Extensive professional experience in data engineering, including leadership or solution‑ownership responsibilities.
  • Deep expertise in Python for large‑scale, production‑grade data engineering.
  • Strong command of SQL and data modeling (dimensional, normalized, lakehouse, or hybrid patterns).
  • Proven experience designing and operating enterprise data platforms (data lakes, warehouses, analytical stores).
  • Solid experience with CI/CD, automated testing, code quality, and engineering governance for data workloads.
  • Strong understanding of cloud platforms and data services (Azure strongly preferred).
  • Demonstrated ownership of data security, access control, observability, and operational reliability.

Preferred

  • Strong hands‑on and design experience with modern data ecosystems (e.g., Spark, Databricks, Airflow, dbt, Azure Data Factory).
  • Experience with streaming and event‑driven architectures (Kafka, Azure Event Hubs, Service Bus).
  • Prior ownership of lakehouse or enterprise data warehouse strategies.
  • Experience operating in financial services or other regulated environments.
  • Familiarity with infrastructure‑as‑code (Terraform, Bicep) and platform automation.
  • Proven success working across distributed, cross‑functional, and global teams.

At EY, we’ll develop you with future-focused skills and equip you with world-class experiences. We’ll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams.

Are you ready to shape your future with confidence? Apply today.

To help create the best experience during the recruitment process, please describe any disability-related adjustments or accommodations you may need.

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.

Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.

EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Our offer of employment is contingent upon the successful completion of a background check and pre-screening requirements. The candidate acknowledges that all information provided must be accurate.