Head of Group Data Delivery

Experience

--

Employment Type

Full-time

Position

--

Salary Offer

Job Description

Euronext is seeking a Head of Group Data Delivery to lead the development, maintenance, and expansion of the Group Data Platform.

Client Details

Euronext is the leading pan-European market infrastructure, shaping capital markets for future generations. Its mission is to connect European economies to global capital markets, to accelerate innovation and sustainable growth. Euronext operates in 18 countries across Europe, the US and Asia, with regulated exchanges in Belgium, France, Ireland, Italy, the Netherlands, Norway and Portugal. The group has expanded both organically and externally, with revenue growing from €458 million in 2014 to €1.5 billion in 2022, and today counts 2,200 employees of 55 nationalities.

Description

  • Group Data Infrastructure Management:
    Oversee the design, deployment, and maintenance of the transversal Group Data Platform, including data ingestion, transformation, and distribution layers in a Data Mesh / Data Fabric setup.
  • Data Marketplace and BI Solutions:
    Leveraging the concepts of Data Products, support and extend the Group Data Marketplace, ensuring smooth integration with BI platforms and the different Business Data Domains, and facilitating data accessibility for business users.
  • Data Governance Implementation:
    Deploy and manage Data Governance systems, including data catalog, glossary, lineage, and Data Quality solutions, ensuring adherence to governance policies and best practices.
  • Team Leadership:
    Lead and mentor two local teams: one in charge of the transversal data services (Dev and DevOps) and one of data engineers responsible for the delivery of data projects for business teams.
  • Cross-Functional Collaboration:
    Partner with the Head of Group Data Office and the delivery managers of other CTO areas to align data delivery initiatives with broader business strategies, leveraging Group Data Platform capabilities.
  • Platform Adoption and Training:
    Promote the adoption of the Group Data Platform through effective communication, training, and change management initiatives.
  • Compliance and Quality Assurance:
    Ensure all data delivery solutions comply with governance standards, data security, and quality requirements.
  • Cost and Delay Control:
    Manage the teams' planning and budget to ensure services are delivered in line with the business development plans.
  • Continuous Improvement:
    Identify opportunities to optimize data delivery processes, automate workflows, and enhance scalability and efficiency.

Profile

  • Data Architecture:

Extensive experience in designing and managing distributed and scalable data architectures, leveraging principles of Data Mesh and Data Fabric. Proficiency with cloud platforms such as AWS (S3, Glue, Redshift, Athena) or Azure (ADLS, Synapse).

Hands-on experience with containerization and orchestration tools like Docker and Kubernetes to deploy and scale data platforms.
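
By way of illustration, a minimal sketch of querying platform data on AWS with boto3 and Athena; the database name, table, and S3 output location are hypothetical stand-ins:

    import time

    import boto3

    # Hypothetical Athena database and S3 output location; replace with real platform values.
    athena = boto3.client("athena", region_name="eu-west-1")

    execution = athena.start_query_execution(
        QueryString="SELECT trade_date, COUNT(*) AS n FROM trades GROUP BY trade_date",
        QueryExecutionContext={"Database": "markets_domain"},
        ResultConfiguration={"OutputLocation": "s3://group-data-platform/athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes; production code would add a timeout and error handling.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    print(f"Query {query_id} finished with state {state}")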

  • Data Ingestion and Transformation:

Strong expertise in data ingestion and transformation tools: Apache Spark, Apache Kafka, and Flink for batch and streaming data processing; Talend, dbt, or Fivetran for building and automating data pipelines.

Knowledge of Lakehouse architectures using tools like Databricks, Delta Lake, Apache Iceberg or Apache Hudi.
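
As a rough sketch of the streaming-to-Lakehouse pattern above, the following PySpark Structured Streaming job reads a Kafka topic and appends it to a Delta table; the broker, topic, and S3 paths are hypothetical, and the spark-sql-kafka and delta-spark packages are assumed to be available:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("trades-ingestion").getOrCreate()

    # Read raw events from a hypothetical Kafka topic.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "trades")
        .load()
    )

    # Kafka delivers key/value as binary; cast to strings before landing them.
    events = raw.select(
        col("key").cast("string"),
        col("value").cast("string"),
        col("timestamp"),
    )

    # Append into a bronze-layer Delta table, tracking progress via a checkpoint.
    query = (
        events.writeStream.format("delta")
        .option("checkpointLocation", "s3://group-data-platform/checkpoints/trades")
        .outputMode("append")
        .start("s3://group-data-platform/bronze/trades")
    )

    query.awaitTermination()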

  • Data Storage and Management:

Experience with storage solutions such as Amazon S3, Hadoop, or Snowflake.

  • Data Governance and Security:

Experience implementing data cataloging and discovery platforms, such as Data Galaxy, Alation, or Amundsen, to enable efficient data exploration.

Experience implementing privacy and compliance controls and managing RBAC with tools such as Apache Ranger or IAM policies.
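
On the IAM side, a minimal boto3 sketch of a per-domain read-only grant; the bucket, prefix, and policy name are hypothetical:

    import json

    import boto3

    iam = boto3.client("iam")

    # Hypothetical read-only policy scoped to one business data domain's S3 prefix.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::group-data-platform",
                    "arn:aws:s3:::group-data-platform/markets-domain/*",
                ],
            }
        ],
    }

    iam.create_policy(
        PolicyName="markets-domain-read-only",  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document),
    )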

  • BI, Analytics, and Self-Service:

Proficiency with BI and visualization platforms such as Power BI, Tableau, or Metabase for delivering insights and reports.

Experience creating self-service data platforms and building APIs for self-service data product consumption using OpenAPI, GraphQL, or RESTful APIs.
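
For instance, a RESTful data product endpoint could look like the following FastAPI sketch; the catalog contents and routes are hypothetical stand-ins for the marketplace's real metadata store:

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Data Marketplace API")

    class DataProduct(BaseModel):
        name: str
        domain: str
        owner: str
        endpoint: str

    # Hypothetical in-memory catalog standing in for the governance metadata store.
    CATALOG = {
        "daily-trades": DataProduct(
            name="daily-trades",
            domain="markets",
            owner="data-engineering",
            endpoint="/products/daily-trades/data",
        ),
    }

    @app.get("/products/{name}", response_model=DataProduct)
    def get_product(name: str) -> DataProduct:
        """Return the metadata a consumer needs to discover and call a data product."""
        product = CATALOG.get(name)
        if product is None:
            raise HTTPException(status_code=404, detail="unknown data product")
        return product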

  • Automation and Observability:

Expertise in implementing data observability and monitoring tools like Monte Carlo, Sifflet, Datadog, or Prometheus for ensuring platform reliability and data health.

Experience automating workflows and orchestration with tools such as Airflow, Dagster, or Prefect.
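
A minimal orchestration sketch, assuming Airflow 2.4+ and hypothetical task bodies:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        # Hypothetical: pull yesterday's files from the landing zone.
        ...

    def transform() -> None:
        # Hypothetical: run the domain's transformation job.
        ...

    with DAG(
        dag_id="daily_positions_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        extract_task >> transform_task  # transform runs only after extract succeeds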

  • Programming and Development:

Programming skills in Python and SQL for data engineering tasks. Familiarity with software engineering practices, including CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI.

  • Data Science Tools:

Knowledge of platforms like Dataiku, KNIME, or Jupyter Notebook.

Job Offer

A good opportunity to ramp up a global data platform project.

Apply through the website.