
At LF Energy Summit 2024 in Brussels, Christian Merz from Elia Group and Nico Westerbeck from InstaDeep delivered a presentation titled “A GPU-Native Approach on Tackling Grid Topology Optimization.” The session explored GPU-native methods for optimizing grid topology, addressing significant challenges faced by grid operators, particularly congestion and redispatch costs (the full video is embedded at the end of this post).

The Need for Topology Optimization

Merz and Westerbeck began by highlighting the urgency of addressing grid congestion, which has led to billions of euros in redispatch costs in recent years. With the increasing integration of renewable energy sources, the pressure on grid infrastructure has intensified. In 2022 alone, Germany faced around €4 billion in redispatch costs, with a significant portion of renewable energy being curtailed. In this context, effective grid topology optimization emerges as a critical solution to alleviate congestion and reduce redispatch needs.

Two primary short-term solutions were discussed:

  1. Redispatch: A costly measure in which power generation is reduced in congested areas and increased elsewhere in the grid.
  2. Topology Optimization: Reconfiguring grid connections to balance loads and avoid congestion – much like how Google Maps reroutes traffic around jams.

Key Topological Actions

The presenters identified three main topological actions used in grid topology optimization:

  1. Bus Bar Splitting: Splitting a substation’s bus bar into separate electrical nodes to change how power flows through the grid.
  2. Bus Bar Reassignment: Reassigning loads and injections from one bus bar to another within a substation.
  3. Switching Lines On/Off: Opening or closing specific transmission lines to redirect power flow.

Despite the potential benefits of these actions, the complexity of the grid makes finding optimal configurations extremely challenging: the number of possible switching combinations is on the order of 10^120, far too many to enumerate exhaustively.

The GPU-Native Solution

To address this complexity, Merz and Westerbeck emphasized the importance of accelerating computations, particularly through GPU-native methods. Traditional load flow calculations, particularly the AC load flow, offer the best approximation of grid behavior but are computationally expensive. The team explored the DC load flow as a faster, albeit less precise, alternative. DC load flow involves solving a system of linear equations, which can be efficiently handled on GPUs.
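To make the idea concrete, here is a minimal DC load flow sketch in JAX (a natural fit given the GPU-native theme). The toy network, variable names, and structure are illustrative assumptions, not the presenters' code: build the nodal susceptance matrix, solve a linear system for the bus voltage angles, and read the line flows off the angle differences.

```python
# Minimal DC load flow sketch in JAX (illustrative only; the toy 3-bus
# network and all names are assumptions, not the presenters' code).
import jax.numpy as jnp

# Toy 3-bus network: bus 0 is the slack bus.
lines = jnp.array([[0, 1], [1, 2], [0, 2]])   # (from_bus, to_bus) per line
x = jnp.array([0.1, 0.2, 0.1])                # line reactances (p.u.)
p_inj = jnp.array([0.0, -0.6, -0.4])          # net injections per bus (p.u.)

n_bus = 3
b_line = 1.0 / x                              # line susceptances

# Assemble the nodal susceptance matrix B.
B = jnp.zeros((n_bus, n_bus))
for (f, t), b in zip(lines.tolist(), b_line.tolist()):
    B = B.at[f, f].add(b).at[t, t].add(b)
    B = B.at[f, t].add(-b).at[t, f].add(-b)

# DC load flow: solve the reduced linear system for the non-slack angles
# (the slack angle is fixed to zero).
theta = jnp.concatenate([jnp.zeros(1), jnp.linalg.solve(B[1:, 1:], p_inj[1:])])

# Line flows follow from the angle differences across each line.
flows = b_line * (theta[lines[:, 0]] - theta[lines[:, 1]])
print(flows)
```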

The key innovation in this approach is the use of Bus Split Distribution Factors (BSDF), together with analogous distribution factors for transmission line switching, which reduce complex topology changes to rank-one updates of a precomputed matrix inverse. Because no full re-factorization is needed for each candidate topology, load flows can be evaluated rapidly, significantly reducing computational time while maintaining acceptable accuracy.
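The presentation itself does not include code, but the rank-one update idea can be sketched with the Sherman-Morrison identity: when a topology action changes the susceptance matrix by an outer product, the precomputed inverse can be patched in place instead of being recomputed from scratch. The snippet below is a hedged illustration under that assumption, using line switching as the simplest case (bus splits follow the same pattern with a small number of such updates); the function names are made up.

```python
# Sherman-Morrison rank-one update of the inverse susceptance matrix when a
# line is switched out. Illustrative sketch only; names are assumptions.
import jax.numpy as jnp

def sherman_morrison(A_inv, u, v):
    """Return (A + u v^T)^-1 given A^-1, without re-factorizing A."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - jnp.outer(Au, vA) / (1.0 + v @ Au)

def switch_line_out(B_inv, i, j, b):
    """Update B^-1 after removing a line of susceptance b between buses i and j."""
    m = jnp.zeros(B_inv.shape[0]).at[i].set(1.0).at[j].set(-1.0)
    # Removing the line changes B by -b * m m^T, a rank-one modification.
    # (If the removal islands part of the grid, the denominator in the
    # Sherman-Morrison formula approaches zero and the update is invalid.)
    return sherman_morrison(B_inv, -b * m, m)
```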

Results and Benefits

By leveraging GPU acceleration, Merz and Westerbeck reported a remarkable improvement in the speed of load flow calculations. Their solution, combining GPUs with the BSDF method, evaluates millions of load flows per second, compared with roughly 100 load flows per second for a conventional CPU-based tool such as pandapower, a speedup of several orders of magnitude that also drastically reduces computational cost.
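Much of that throughput comes from batching: once flows reduce to dense linear algebra, a large batch of scenarios or candidate topologies can be evaluated in a single GPU kernel launch. The sketch below illustrates the pattern with a random placeholder in place of a real PTDF (power transfer distribution factor) matrix; all shapes and data are assumptions for illustration, not benchmark figures from the talk.

```python
# Throughput sketch: in the DC approximation, line flows are a linear map of
# bus injections, so a whole batch of load flows is one matrix multiplication.
import jax
import jax.numpy as jnp

n_lines, n_buses, n_scenarios = 1_000, 600, 10_000

key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
ptdf = jax.random.normal(key_a, (n_lines, n_buses))            # placeholder for a real PTDF
injections = jax.random.normal(key_b, (n_buses, n_scenarios))  # one column per scenario

@jax.jit
def batched_dc_flows(ptdf, injections):
    # Each column of the result holds the line flows for one scenario.
    return ptdf @ injections

flows = batched_dc_flows(ptdf, injections)   # shape (n_lines, n_scenarios)
```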

This speed and efficiency open new possibilities for grid planners, enabling them to explore a vast range of topological options and optimize the grid more effectively. In practice, this can lead to a 30% reduction in grid congestion, resulting in significant cost savings.

Future Developments

The team outlined their future goals, including integrating more complex constraints and working toward an industrial-scale solution by the end of 2024. They also aim to expand the applicability of their approach to cover voltage optimization and more advanced load flow calculations, such as AC load flows, while maintaining the GPU-native framework.

In collaboration with grid operators, the team plans to refine the optimization process, ensuring it is robust enough to handle real-world constraints. They also see exciting potential in developing a broader ecosystem of GPU-based solvers to tackle a range of grid planning challenges.