TenneT
No maximum hourly rate
Gelderland
32 hours per week
ICT Informatievoorziening
30 June 2025
2 July 2025
The Digital & Data organization at TenneT is focused on driving innovation and leveraging digital technology to enhance data-driven decision-making across the company. As part of this mission, the organization has developed the TenneT Data Cloud (TDC), a modern cloud-based data platform built on Azure. This platform supports a wide range of data integration, processing, and analytics tasks, serving as the foundation for data initiatives across TenneT. Within this structure, DevOps teams play a central role, working closely with stakeholders to deliver high-quality, scalable, and reliable data solutions that meet the evolving needs of the business.
Function:
As a Cloud Data Platform Engineer in TenneT’s Digital & Data organization, you will be a crucial member of a DevOps team responsible for designing, implementing, and maintaining the TenneT Data Cloud (TDC) on Azure. Your role involves setting up and managing Azure services like Azure Data Factory, Azure Databricks, and Microsoft Fabric, ensuring seamless integration with various data sources and automating workflows to enhance efficiency. Additionally, you’ll monitor and optimize the performance of the TDC to uphold high standards of availability and reliability, staying current with the latest Azure technologies and best practices to continuously improve the platform.
Tasks and responsibilities:
• Design, develop, and implement scalable data solutions using Microsoft Azure services.
• Manage containerized applications with Azure Kubernetes Service (AKS).
• Build and maintain CI/CD pipelines to support efficient, automated deployment and testing of data engineering workflows.
• Develop and maintain data processing solutions using Python, Java, or other relevant programming languages.
• Ensure effective data storage, ingestion, transformation, and analytics leveraging Azure data services.
• Design, develop, and integrate APIs to facilitate seamless data exchange with external systems.
• Implement automated workflows and system integrations to streamline operations.
• Use Infrastructure as Code (IaC) tools to provision and manage cloud infrastructure on Azure.
• Design, build, test, deploy, and maintain applications with a focus on performance, fault tolerance, observability (logging and monitoring), and reliability.
• Write and maintain unit and integration tests to ensure code quality and reliability.
• Troubleshoot and resolve issues identified through testing or reported by users.
• Continuously identify opportunities to improve existing technical solutions and team practices.
• Actively participate in knowledge sharing, design discussions, and technical reviews within the team.
Profile:
• Bachelor’s in Computer Science, Engineering, or a related field (or equivalent practical experience).
• Extensive experience (minimum of 7 years) with Microsoft Azure services, including but not limited to Azure Kubernetes Service (AKS), Azure Data Lake Storage, Azure Data Factory, and Azure Databricks (must-have).
• Proven track record of designing and deploying scalable, production-grade data pipelines and distributed data processing solutions.
• Strong proficiency in Databricks development, including notebook orchestration, Delta Lake, structured streaming, and performance optimization.
• Deep understanding of CI/CD practices and tools (e.g., Azure DevOps, GitHub Actions, Jenkins) for automating deployment and testing workflows.
• Advanced scripting and development skills in Python, Java, and SQL, with the ability to write clean, testable, and maintainable code.
• Experience provisioning and managing cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Bicep.
• Familiarity with building and integrating RESTful APIs for data access and interaction with external systems.
• Experience with automated workflows, event-driven architectures, and data integration pipelines.
• Solid understanding of data engineering principles including data modeling, ETL/ELT patterns, and data governance.
• Knowledge of big data technologies and frameworks such as Apache Spark, Kafka, and Hadoop.
• Strong analytical and problem-solving skills with the ability to debug and optimize complex systems in production.
• Excellent communication and interpersonal skills; ability to collaborate effectively in agile, cross-functional teams.
• High proficiency in English; Dutch is not mandatory.
Soft skills:
• Team player and communicative
• Proactive
• Open minded and flexible
• Ambitious and driven
• Involved and motivated
Conditions:
• Upon entry, TenneT performs a Pre-Employment Screening.
• The official duty station for this position is Arnhem MCE; two days per week in the office (team day on Thursday), the rest hybrid.
• One interview with a panel of 2 or 3 partners, held online via Teams.
Additional information:
• Suppliers must be aware of the laws and regulations regarding employment conditions and of TenneT’s Collective Labour Agreement. This assignment is placed in scale 8.
• We would like to receive the candidate’s personal motivation and CV in English or Dutch.
Not applicable
Because this process runs via a tender, it is important that you stand a good chance of winning the assignment. If there is a match, we start the quotation process; if we have doubts, we will let you know within 1 working day.
The procedure runs via a tender. We therefore make the first introduction on paper.
We believe in doing business fairly and transparently.
If you start working via Bij Oranje Detachering, the following conditions apply: