The Grid Doesn’t Wait for a Requirements Document
Hugo Pfister, Manager Grid Security Applications, TenneT Netherlands
TL;DR: TenneT shows that embedding software development directly within grid operations, rather than separating “business” and IT, enables faster iteration, better use of domain knowledge, and more effective tooling. Built on the open source PowSyBl framework, this model improves performance while leveraging shared infrastructure and community-driven development.
TenneT is replacing its grid security analysis tooling by embedding software development directly in the operations team. The engineers with first-hand operational knowledge are building on PowSyBl, an open source simulation framework originally developed by RTE and hosted at LF Energy. The result is a tenfold improvement in analysis performance. This article explains the organisational model behind that outcome and what it means for transmission system operators (TSOs) and distribution system operators (DSOs) considering a similar shift.
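For readers outside operations, it helps to see what “grid security analysis” actually computes. The sketch below is a deliberately toy illustration of N-1 contingency screening, the core calculation behind frameworks like PowSyBl: take each network element out of service in turn, re-solve the power flow, and flag any post-contingency overloads. The three-bus network, the DC approximation, and every number in it are invented for illustration; production tools work on full AC models of real networks, which is exactly where analysis performance matters.

```python
# Toy N-1 contingency screening via DC power flow on an invented 3-bus
# network. Illustrative only: real tools such as PowSyBl use full AC
# models; the network data and thermal limits here are made up.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def dc_flows(lines, injections):
    """DC power flow: bus 0 is the slack; buses 1 and 2 carry `injections`.

    `lines` maps (i, j) -> (susceptance, thermal rating), all in p.u.
    Returns the active-power flow on every line.
    """
    # Build the 2x2 reduced susceptance matrix for the non-slack buses.
    B = [[0.0, 0.0], [0.0, 0.0]]
    for (i, j), (b, _) in lines.items():
        for k in (i, j):
            if k > 0:
                B[k - 1][k - 1] += b
        if i > 0 and j > 0:
            B[i - 1][j - 1] -= b
            B[j - 1][i - 1] -= b
    t1, t2 = solve_2x2(B[0][0], B[0][1], B[1][0], B[1][1], *injections)
    theta = [0.0, t1, t2]  # slack angle fixed at zero
    return {ij: lines[ij][0] * (theta[ij[0]] - theta[ij[1]]) for ij in lines}

def n_minus_1(lines, injections):
    """Take each line out of service in turn and report overloaded lines."""
    overloads = {}
    for out in lines:
        remaining = {ij: v for ij, v in lines.items() if ij != out}
        flows = dc_flows(remaining, injections)
        bad = [ij for ij, f in flows.items() if abs(f) > remaining[ij][1]]
        if bad:
            overloads[out] = bad
    return overloads

# Invented data: (susceptance in p.u., thermal rating in p.u. flow).
lines = {(0, 1): (10.0, 1.0), (0, 2): (10.0, 1.0), (1, 2): (5.0, 0.5)}
injections = (-0.6, -0.6)  # both load buses draw 0.6 p.u.

print(n_minus_1(lines, injections))
```

On this toy network the screening reports that losing either supply line overloads the two survivors; surfacing exactly that kind of result, quickly and at real-network scale, is what the tooling discussed below exists to do.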
The Problem with Separating “Business” from “IT” in Power Systems
In most industries, the gap between domain experts and software developers can be bridged through requirements engineering, agile sprints, and product management. Imperfectly, but well enough.
In power systems, that gap is structural and expensive.
Eric von Hippel at MIT describes the problem as “sticky information”: knowledge so bound to its context that it cannot be transferred without catastrophic loss. In a TSO or DSO, that knowledge is everywhere. It lives in the physics of how a specific network behaves under various conditions, in the operational constraints an operator knows by experience and intuition, in the difference between a simulation artefact and a real grid anomaly. Only someone inside the operation can reliably tell which is which.
When you try to hand that knowledge across an organisational boundary to a software team operating at arm’s length, something essential gets lost. The latency between an operational insight and its codification translates directly into slower iteration, higher maintenance costs, and tools that are technically complete but operationally suboptimal.
The answer is not to get better at writing requirements. The answer is to dissolve the boundary.
What This Looks Like in Practice
At TenneT, the PowSyBl deployment succeeded because the developers were not building for the grid security team. They were part of it. Operational questions and engineering answers were in the same room, often in the same conversation. When a simulation produced an unexpected result, the person who understood why it was unexpected and the person who could fix it were, more often than not, the same person.
Good product teams work this way as a matter of course. Utilities have been slow to get there, largely because the prevailing model still treats software development as an IT function rather than an operational capability. Software built by people embedded in the operation evolves with it. Software built at arm’s length calcifies around the requirements that were written down the day it was specified.
The same logic applies whether you are a transmission operator running contingency analysis or a distribution operator managing load forecasting and congestion. The operational knowledge that makes software genuinely useful is never fully captured in a requirements document. It lives with the engineers who work the problem every day.
It is important to recognise at the outset that this model does not scale easily. Only a limited number of people can work within the daily operational reality and build the required domain knowledge. This forces strong prioritisation of what is built and maintained. Within that boundary, however, a level of quality and agility can be achieved that is hard to match in a traditional organisational setup.
What Central IT Should Actually Do
None of this is an argument against central IT. It is an argument about what central IT should be optimising for.
When innovation lives within the business, central IT’s job is to be invisible in the best possible sense: infrastructure so reliable and standardised that embedded development teams never have to think about it. Platform engineering, security compliance, shared tooling standards, CI/CD pipelines. The “plumbing” that makes local innovation viable.
The failure mode is when central IT mistakes control for value. Pushing proprietary or heavily modified tooling because it is “internally approved”, while the developer community has already converged on something better, creates technical debt immediately and slows down the teams central IT exists to enable.
Google’s DORA (DevOps Research and Assessment) research is unambiguous here: autonomy in tooling is a direct predictor of software delivery performance. Managing the tension between developer autonomy and compliance is precisely where a mature central IT function earns its place.
Ultimately, nothing the business builds should stand on its own. It is created on top of shared platform capabilities and integrated with the broader IT landscape. When the business is able to build and maintain mission-critical software, that is not a workaround or a risk. It is a validation of central IT. It demonstrates that IT has successfully abstracted complexity, standardised foundations, and enabled this way of working.
Where Shared Infrastructure Fits
Every grid operator and vendor ends up spending engineering time on foundational capabilities that are not differentiating. That time and budget could go toward the tooling that actually sets you apart.
Over the past decade, LF Energy has become the place where that foundational layer gets built, together. PowSyBl is one example: originally contributed by RTE, it is now the framework multiple TSOs build on, including TenneT. Others in the same community are doing the same with SEAPATH for digital substation infrastructure. The model is consistent: shared foundations, with engineering effort directed at what actually differentiates you.
When TenneT’s grid security team builds on PowSyBl, we are building on a framework already running in production at RTE, Baltic RCC and others. The operational credibility is shared. The maintenance burden is shared. And the roadmap is governed by those who depend on it, informed by a broad practitioner-led community rather than any single team’s priorities.
That last point is worth dwelling on. Using open source code in isolation is one thing. Being inside the community that governs it is something different. When a regulation changes, or a new interoperability requirement emerges, or a security issue surfaces, you are in the room where the response gets shaped, not waiting for a vendor to release a patch.
Where to Start
Making this shift does not require a reorganisation. It requires one tactical decision: where does the next software capability live? When your grid operations team identifies a tooling gap (a simulation that takes too long, data that has to become information, an operational process still running on spreadsheets and email), ask the question plainly: should the solution come from IT, or from engineers already inside the operational reality?
The organisational model then follows from the work, not the other way around.
Start with an initiative that solves a problem you have today. The peer group is already there, the infrastructure is production-grade, and the community will meet you wherever you are on the journey. From there, demonstrate value in practice to build credibility, both with business stakeholders, where this model can be difficult to sell without tangible proof, and with IT by showing that what is built meets or exceeds their standards. In parallel, actively involve IT from the outset and make them co-owners of the initiative. Engage solution architects and other key stakeholders early, so they become part of the build process and evolve into ambassadors for the approach.
Hugo Pfister is Manager Grid Security Applications at TenneT Netherlands.
Read the TenneT/PowSyBl case study here.
About PowSyBl
PowSyBl (Power Systems Blocks) is an open source framework written in Java for modelling power systems, performing power flow calculations, running contingency analyses, and supporting capacity calculation. Originally developed by RTE, the French transmission system operator, PowSyBl is hosted at LF Energy under neutral open governance. It is used in production by multiple European TSOs and regional coordination centres. Learn more at powsybl.org.
About TenneT
TenneT is a leading European electricity transmission system operator with its main activities in the Netherlands and Germany. TenneT operates approximately 25,000 kilometres of high-voltage connections and serves around 43 million end-users. TenneT is committed to developing a sustainable and reliable energy supply and is at the forefront of integrating large-scale renewable energy into the transmission grid. Learn more at tennet.eu.
About LF Energy
LF Energy is the neutral home for collaborative development of open source software, standards, and data for the energy sector. Hosted by the Linux Foundation, LF Energy supports more than three dozen projects spanning the full breadth of grid modernisation, from transmission modelling and substation digitalisation to EV charging and distributed energy resource management. Its members include utilities, vendors, researchers, and technology providers working together to deliver affordable, reliable, safe, and clean energy.
Learn more at lfenergy.org.
AI Disclosure
This post used artificial intelligence tools for research, structural assistance, or grammatical refinement. The final content was reviewed, edited, and validated by human contributors to LF Energy to ensure accuracy and alignment with our community standards. We remain committed to transparency in the use of generative technologies within the open source ecosystem.