Multi-objective cost function minimization can be used to improve the performance of a hydroelectric powerhouse, resulting in production of the most power for the least cost.
By Michael Curtiss, Jonas Parker, and Parker Scoggins
Optimization is the process by which a system is tuned to create the best output for a given input. For hydroelectric plants, optimization means using less water to produce the desired amount of power.
In the 1980s, the Bonneville Power Administration, the U.S. Army Corps of Engineers and other federal agencies collaborated to define five “types” of optimization pertaining to hydropower:
– Type 1: Optimization of an individual unit in terms of power output per amount of flow at constant head;
– Type 2: Coordination of generating units to achieve a powerhouse output setpoint using the least amount of flow. This level of optimization is achieved with the best possible unit selection and load sharing;
– Type 3: Coordination of all the dams and hydroelectric powerhouses in a river basin or watershed;
– Type 4: Coordination of multiple river basins and watersheds in a region; and
– Type 5: Coordination of all of a region’s various generating resources (including thermal, wind, etc.).
It is important to understand that each successive optimization type relies on the type before it. Without first achieving Type 1 optimization, the accuracy of Type 2 optimization cannot be guaranteed. Type 1 optimization is achieved by index testing a hydro turbine to determine its efficiency profile. Methods vary based on turbine type and powerhouse construction. Essentially, a table of power vs. flow relationships is assembled, usually at three levels of head: high, medium and low.
For fixed-blade turbines, this requires collecting a single data point for each value of flow rate. But Kaplan turbines require collecting a data point for several blade angles at each value of flow rate, to determine the blade angle that provides maximum efficiency for a given flow rate, for a given head. These peak efficiencies can then be used to develop an efficiency profile for the Kaplan unit that is comparable to that of a fixed-blade unit.
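The reduction of Kaplan index-test data to a fixed-blade-style profile can be sketched as follows. The (flow, blade angle, efficiency) samples below are illustrative values, not measured data:

```python
# Sketch: reducing Kaplan index-test data to a fixed-blade-style profile by
# keeping, for each flow rate, the blade angle with the highest efficiency.
# The (flow, blade angle, efficiency) samples are illustrative, not measured.
from collections import defaultdict

samples = [
    (100, 10, 0.86), (100, 15, 0.89), (100, 20, 0.87),
    (120, 10, 0.84), (120, 15, 0.90), (120, 20, 0.91),
]  # (flow, blade angle in degrees, efficiency)

best = defaultdict(lambda: (None, 0.0))  # flow -> (best angle, best efficiency)
for flow, angle, eff in samples:
    if eff > best[flow][1]:
        best[flow] = (angle, eff)

profile = {flow: eff for flow, (_, eff) in sorted(best.items())}
print(profile)  # -> {100: 0.89, 120: 0.91}
```

The resulting peak-efficiency table plays the same role as a fixed-blade unit's single efficiency profile in the load-sharing methods discussed below.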
This article focuses on Type 2 optimization, which concerns unit selection and load sharing in the whole powerhouse (see Figure 1). The authors will discuss a rapid method for achieving Type 2 optimization with a new, very fast computer algorithm. This algorithm is an essential building block for higher levels or types of optimization problems.
Understanding Type 2 optimization
To appreciate the value of Type 2 hydropower optimization, it is important to understand that every turbine in a powerhouse is slightly different. Although all the turbines in the powerhouse may have been specified as the same design, differences during the fabrication process result in unit operating efficiency differences of 1% to 5%. Years of operation tend to exacerbate these differences. Every hydro turbine has an efficiency profile that varies across its range of flow (see Figure 2).
In a modern hydroelectric powerhouse, the performance profiles of each turbine will be available in a stored database. When a power setpoint is chosen, the powerhouse operator can choose to have some turbines on and some off, a process called unit selection. The next parameter to determine is load sharing. The best load sharing is achieved when the wicket gate openings for all turbines are set individually, such that the power output is met using the minimum overall flow rate.
Both decisions are more difficult than they might seem. A skilled hydroelectric plant operator may lose up to 5% of the water available for use at the dam to less-than-ideal unit selection and load sharing. During conditions where water must be spilled over the dam, that 5% may not make a lot of difference. However, when the flow of the river is insufficient to meet the net energy demand, this 5% can represent a significant corresponding loss of revenue.
Take for example the Federal Columbia River Power System, which has a total generating capacity of 22,060 MW. “In good water years, the Columbia River Basin hydro system can produce about 18,000 average megawatts of electricity, and in poor water years, as little as 11,700 average megawatts.”1
Suppose the proposed algorithm generates, at a minimum, a system-wide average improvement of 1%. If the Federal Columbia River Power System’s collective powerhouse efficiency were 88%, the system would then be operating in the 89% efficiency range.
The following equation quantifies the improved output from this increased efficiency:
Improved Output = (0.89 / 0.88) x 11,700 MWavg = 11,833 MWavg
Put in terms of energy, this is: (11,833 – 11,700) x 24 x 365 MWh/yr, or about 1,165 GWh/yr of additional, free energy. This would garner additional annual revenue of about $29 million, assuming an average market rate of $25/MWh.
Methods for load sharing and unit selection
Below the authors discuss five methods for load sharing and unit selection and elaborate on the advantages and drawbacks of each.
Brute force method for load sharing and unit selection
The slowest way to achieve Type 2 optimization is the “brute force” method. This method examines every possible load allocation (and thus, every possible unit selection) among the available turbine-generating units, and it requires a staggering number of calculations to achieve a solution. For example, if there are 10 units in a powerhouse and 100 power increments for each turbine, the unit selection and load sharing will be solved factorially as shown in Equation 2 below:
Iterations = 100! / (100 – 10)! ≈ 6.28 × 10^19
Using the brute force method will determine all possible “power in” vs. “power out” solutions. Among all of the solutions that match the demanded power output, the one that requires the least flow is then selected.
To give an idea of how long this would take, a test of the brute force method on three turbine curves of 150 increments each required 2.5 seconds to solve, and the number of calculations was about 3.3 million. Applying this to the 10-turbine powerhouse illustrated in Equation 2, the solution would be achieved in about 1.5 million years!
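The brute-force search can be sketched on a toy two-unit powerhouse. The quadratic flow-vs-power curves and the 60-MW setpoint below are hypothetical, chosen only to make the enumeration concrete:

```python
# Sketch of the brute-force search on a toy two-unit powerhouse.
# The quadratic flow-vs-power curves are hypothetical, not measured data.
from itertools import product

def flow_unit1(p):  # flow needed to produce p MW (illustrative curve)
    return 0.02 * p**2 + 1.0 * p + 5.0

def flow_unit2(p):  # a slightly less efficient unit
    return 0.025 * p**2 + 1.1 * p + 5.0

SETPOINT = 60                     # MW demanded from the powerhouse
increments = range(0, 101)        # each unit searched in 1-MW steps

best = None
for p1, p2 in product(increments, repeat=2):   # every possible allocation
    if p1 + p2 == SETPOINT:
        total_flow = flow_unit1(p1) + flow_unit2(p2)
        if best is None or total_flow < best[0]:
            best = (total_flow, p1, p2)

print(best)  # lowest-flow allocation: 34 MW on unit 1, 26 MW on unit 2
```

Even this tiny case examines every allocation; the iteration count explodes combinatorially as units and power increments are added, which is the point of the 1.5-million-year estimate above.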
Hill climbing method for load sharing
The hill climbing method begins its calculations at full power. It then finds the best solution for every increment of decreased power down toward zero. For every increment of decreased power, this method will examine possible solutions next to the last operating point until the load sharing that consumes the least amount of water is found.
This method is much faster than the brute force method but has a couple of flaws. First, the speed of the solution is dependent on the granularity of the efficiency profiles. If a curve is given in 1-MW increments, then the solutions will be given in 1-MW increments. If the curve is given in smaller increments, the solution time naturally increases. Conversely, larger increments give less detail, and there can be some efficiency loss as a result.
Perhaps a greater flaw is that the hill climbing method uses convex efficiency vs. power curves for finding a solution. The problem with this is that there are two points of identical efficiency on every curve (not counting the peak), and this can confuse the algorithm as it attempts to find an ideal, overall efficiency.
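A simplified sketch of this kind of incremental neighbor search follows. Unlike the method described above, it starts from an even split rather than sweeping down from full power, and the flow curves and step size are hypothetical:

```python
# Sketch of hill-climbing load sharing: starting from an allocation, repeatedly
# shift one increment of power between units whenever that lowers total flow.
# The quadratic flow curves (a, b, c coefficients) are hypothetical.

def total_flow(alloc, curves):
    return sum(a * p**2 + b * p + c for p, (a, b, c) in zip(alloc, curves))

def hill_climb(setpoint, curves, step=1):
    n = len(curves)
    alloc = [setpoint // n] * n           # start from an even split
    alloc[0] += setpoint - sum(alloc)     # absorb integer rounding
    improved = True
    while improved:
        improved = False
        for i in range(n):                # try moving one step between units
            for j in range(n):
                if i == j or alloc[i] < step:
                    continue
                trial = alloc[:]
                trial[i] -= step
                trial[j] += step
                if total_flow(trial, curves) < total_flow(alloc, curves):
                    alloc, improved = trial, True
    return alloc, total_flow(alloc, curves)

curves = [(0.02, 1.0, 5.0), (0.025, 1.1, 5.0)]
alloc, flow = hill_climb(60, curves)
print(alloc, flow)  # converges to the least-flow split, 34 MW / 26 MW
```

The granularity issue discussed above is visible here: the `step` parameter fixes the resolution of the answer, and halving it roughly doubles the work per sweep.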
Sequential quadratic programming optimization for load sharing
Sequential quadratic programming (SQP) uses Newton’s Method and Taylor series expansion to directly solve a set of unconstrained, quadratic curves by applying global constraints and an educated first guess. The sidebar on page 100 provides a technical explanation of SQP. A hydroelectric unit’s flow vs. power curve can be approximated with a second order or quadratic polynomial, and this simplified mathematical characteristic makes the hydro turbine an ideal candidate for what economists call cost function minimization. Non-linear elements, such as minimum and maximum power outputs, minimum run times, and rough zones can be dealt with by specifying them in the objective function.
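The quadratic structure makes the solution concrete. For purely quadratic flow curves and a single power-balance constraint, the stationarity condition ("equal marginal flow across units") can be solved in closed form, which is what each SQP iteration does for its local quadratic model. A minimal sketch with hypothetical coefficients:

```python
# Sketch: for quadratic flow curves flow_i(p) = a*p^2 + b*p + c and the single
# constraint sum(p) = setpoint, KKT stationarity reduces to "equal marginal
# flow": 2*a_i*p_i + b_i = lambda for every unit. Coefficients are hypothetical.

def load_share(coeffs, setpoint):
    inv = sum(1.0 / (2.0 * a) for a, b, c in coeffs)
    shift = sum(b / (2.0 * a) for a, b, c in coeffs)
    lam = (setpoint + shift) / inv        # common marginal flow at the optimum
    return [(lam - b) / (2.0 * a) for a, b, c in coeffs]

coeffs = [(0.02, 1.0, 5.0), (0.025, 1.1, 5.0)]
alloc = load_share(coeffs, 60.0)
print(alloc)  # roughly 34.4 MW and 25.6 MW
```

In practice a general SQP solver, such as SciPy's `SLSQP` method, would also enforce each unit's minimum and maximum power limits as inequality constraints; the closed form above omits them for brevity.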
Particle swarm optimization for unit selection
This method involves randomly picking a number of turbines to use from the total number available and then randomly picking the specific units to use. If the available power is greater than or equal to the desired power, the load sharing algorithm is run. The process repeats several times, after which the best unit selection and load sharing results are chosen.
There are no guarantees as to the accuracy of the particle swarm optimization method, and its use increases the solution time. Statistically, though, the greater the number of iterations attempted, the better the results achieved. Although some would say this method, which is based on game theory, is too impractical to be considered, results have at times outperformed the following method – lowest peak efficiency dropping – by a fraction of a percent.
In addition, using this method, the best results can be stored to improve the unit selection over time.
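The random selection loop described above can be sketched as follows. The unit capacities, the stand-in load-sharing function, and the iteration count are all illustrative assumptions:

```python
# Sketch of random unit selection: repeatedly pick a random subset of units,
# run load sharing on any subset that can meet the setpoint, and keep the best.
# Capacities, the load-sharing stub, and the iteration count are illustrative.
import random

CAPACITIES = [50, 50, 40, 30]   # MW per unit (hypothetical fleet)
SETPOINT = 70

def load_share_flow(units):
    # stand-in for the real load-sharing algorithm: a crude flow proxy that
    # charges a fixed overhead per running unit
    return sum(5.0 + 1.2 * SETPOINT / len(units) for _ in units)

random.seed(1)                  # seeded so the sketch is repeatable
best = None
for _ in range(200):
    k = random.randint(1, len(CAPACITIES))
    subset = random.sample(range(len(CAPACITIES)), k)
    if sum(CAPACITIES[i] for i in subset) >= SETPOINT:
        flow = load_share_flow(subset)
        if best is None or flow < best[0]:
            best = (flow, sorted(subset))

print(best)
```

As the article notes, there is no accuracy guarantee; with this toy flow proxy the search simply favors the smallest feasible subset, and more iterations raise the odds of finding it.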
Lowest peak efficiency dropping for unit selection
With this method, unit selection is performed by dropping the unit with the lowest peak efficiency. Load sharing is determined and compared to the previous results. Further units are dropped in this same way until the power demand cannot be met using the available generating units, and the highest-efficiency solution is chosen from the results.
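A sketch of this dropping loop follows, with hypothetical peak efficiencies, capacities, and a stand-in load-sharing step:

```python
# Sketch of "lowest peak efficiency dropping": repeatedly drop the unit with
# the lowest remaining peak efficiency, re-run load sharing, and keep the best
# result until the setpoint can no longer be met. All data are hypothetical.

UNITS = [  # (name, peak efficiency, capacity MW)
    ("U1", 0.92, 50), ("U2", 0.88, 50), ("U3", 0.90, 40), ("U4", 0.85, 30),
]
SETPOINT = 70

def load_share_flow(units):
    # stand-in for the real load-sharing step: a flow proxy that improves with
    # average peak efficiency and charges overhead per running unit
    avg_eff = sum(e for _, e, _ in units) / len(units)
    return SETPOINT / avg_eff + 5.0 * len(units)

active = sorted(UNITS, key=lambda u: u[1], reverse=True)  # best peaks first
best = (load_share_flow(active), [u[0] for u in active])
while len(active) > 1:
    active = active[:-1]                       # drop the lowest remaining peak
    if sum(cap for _, _, cap in active) < SETPOINT:
        break                                  # demand can no longer be met
    flow = load_share_flow(active)
    if flow < best[0]:
        best = (flow, [u[0] for u in active])

print(best)  # keeps the two highest-peak units, U1 and U3
```

Because each pass drops exactly one unit, the number of load-sharing runs grows only linearly with the number of units, which is why this method is fast compared with random selection.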
The computer algorithm takes less than a second to find an optimal solution for any given setpoint in a 10-turbine powerhouse, with an accuracy margin of 0.5%. An important point to note about unit selection is that it can work differently when implementing Type 3 optimization. When base points in a generating system that contains many powerhouses are chosen, unit selection may vary from what is ideal for a single powerhouse, to ensure a higher efficiency of the multi-powerhouse profile.
The application of this algorithm has been validated against a Type 2 Optimization study performed by Lee H. Sheldon, P.E. Sheldon has a Master of Science degree in Fluid Mechanics and Hydraulics and has worked in the hydropower industry for more than 40 years. In that time, he has field tested about 50 hydraulic turbines and provided forensic analyses for about a dozen turbine failures. Sheldon has authored about 26 technical papers in the hydropower field and coauthored a college textbook on Hydropower Engineering. He is a professor of hydropower engineering and fluid mechanics at the Oregon Institute of Technology.
Sheldon’s Type 2 optimization study was based on measured data from 14 hydraulic turbines at the 1,780-MW The Dalles Dam powerhouse on the Columbia River in Oregon. Figure 3 shows a comparison of the results calculated by Sheldon and those obtained by the optimization script for the same data set.
The overall efficiency changes across the range of power outputs as optimum load sharing and unit selection are performed for incremental setpoints between minimum and full power. At full power, efficiency drops off because all turbines are beyond their points of peak efficiency. At the left part of the curve in Figure 3, as the least efficient turbines are consecutively dropped off, there are spikes in efficiency that represent the remaining turbines running at peak efficiency. An interpolated curve represents the optimal powerhouse efficiency for every setpoint.
Comparing the results, the optimization script’s total optimum powerhouse efficiency deviated from that arrived at in Sheldon’s study by about ±0.20% of the hand calculation. The study done by Sheldon required two weeks of calculation using a lifetime of expertise. On the other hand, the optimization script’s results were accomplished in a matter of hours, including the time taken to prepare the turbine data for input into the script.
The speed of solution and the similarity of results provide confidence that the SQP method detailed in this article is both practical and accurate.
Comparing the optimal to identical load sharing and unit selection
As a demonstration of the benefit of applying a Type 2 Optimization script such as this one, the authors compare optimized operation of The Dalles Dam to a simulation of random unit selection and identical load sharing between turbines. As more power is needed for the “Identical” model, a random generating unit is activated, rather than the highest efficiency turbine available (see Figure 4). Note that this figure is for demonstration purposes and is not intended to represent how a typical hydroelectric plant operator would manage generating units.
The difference between the two simulations is considerable. At most, the “Identical” load sharing simulation lost 7.9% of the total efficiency that was available to the powerhouse. The average difference is about 5%, and this is no small matter whenever the value of the water flow to the plant owner is considered.
The biggest hurdle for Type 2 hydropower optimization is the speed of solution. If no Type 1 optimization has been performed, there is an inherent loss in powerhouse efficiency. Without accurate efficiency data, optimal unit selection and load sharing are impossible to determine quickly. Even with some knowledge about the efficiencies of individual units in a hydroelectric powerhouse, optimal load sharing is still in question.
The method discussed in this article solves the above problems, provided that Type 1 optimization has been performed. Once the results of a Type 1 optimization have been obtained, every facility can have Type 2 optimization. With this technology, project owners can expect a notable increase in the net energy produced by their powerhouses and increased equipment service life as well.
A Bonneville Power Administration grant is currently under way for development of this algorithm as a stand-alone and/or integrated powerhouse optimization software. Opportunities are available to participate in this development in exchange for a free, customized implementation of the algorithm.
Further Reading on Hydropower Optimization
Sheldon, Lee H., “Reviewing the Approaches to Hydro Optimization,” Hydro Review, Volume 17, No. 3, June 1988, pages 60-67.
Sheldon, Lee H., “Optimizing the Generating Efficiency of Entire Powerhouses,” Proceedings of HydroVision International 2008, PennWell, Tulsa, Okla.
Mariano, Coimbra, Calada and Ferreira, “Powerhouse I/O Curves Considering Head Dependency,” Proceedings of Second International Conference on Power Engineering, Energy and Electrical Drives, www.uninova.pt/powereng2009.
Technical Explanation of Sequential Quadratic Programming
To reiterate, Type 2 optimization is essentially the problem of minimizing water flow through the turbine units of a powerhouse for a given power setpoint. In terms of flow and power, individual units are characterized by sets of nonlinear curves, depending on the head across the unit. If head is assumed to be constant, the constraints for this minimization problem reduce to: the total power demanded and the upper and lower power limits of each unit. Because this is basically a constrained, nonlinear optimization problem with multiple variables, the use of quadratic programming (QP) is appropriate. Quadratic programming is an optimization method used on a quadratic function of several variables subject to linear constraints on those variables.
By using QP, the constraints can be brought into the objective function (the function that is to be maximized or minimized), which describes the power vs. flow curves. This results in a Lagrangian function, L(p, λn), which combines the objective function and the constraints, weighted by multipliers, into a single function whose stationary points are candidates for the constrained optimum.
L(p, λn) = q(p) – Σn λngn(p)
– p is a vector of the individual units’ power setpoints;
– gn(p) are the n constraints as a function of the power setpoint; and
– λn is a scalar multiplier known as the Lagrangian Multiplier of the nth constraint. It scales the gradient of the constraint function so that, at the optimum, it balances the gradient of the objective function.
The objective function, q(p), is the flow to be minimized. Setting the partial derivatives of the Lagrangian to zero gives the conditions under which the optimum can be found. These conditions are typically known as the Karush-Kuhn-Tucker (KKT) conditions, named after the major contributors to their derivation. For Type 2 optimization, the KKT conditions are:
∇q(p) – Σn λn∇gn(p) = 0
λngn(p) = 0
gn(p) ≤ 0, λn ≥ 0
Furthermore, subject to the KKT conditions, a local minimum is found by applying a Quasi-Newton method. Newton’s method is a numerical method that uses a function’s derivative to find successively better approximations of the function’s root. The Quasi-Newton method begins with a second order Taylor series expansion – a polynomial approximation of a function around an operating point, built from the function’s derivatives at that point – which gives a local, quadratic approximation of L(p, λn):
L(pi+1) ≈ L(pi) + ∇L(pi)T(pi+1 – pi) + ½(pi+1 – pi)TH(pi+1 – pi)
– H is the Hessian, ∇2L(pi), the matrix of second derivatives of L evaluated at the current operating point pi.
Taking the derivative of this local approximation with respect to pi+1 and setting it equal to zero gives:
∇L(pi+1) ≈ ∇L(pi) + H(pi+1 – pi) = 0
Solving for pi+1 gives:
pi+1 = pi – H-1∇L(pi)
If pi+1 = pi, then the extremum has been found. However, if pi+1 ≠ pi, the iteration must be continued until a minimum solution is found. The term H-1∇L(pi) is a vector describing a step from the current operating point pi toward the minimum, so each iteration sets both the direction and the length of the step. However, calculating the full Hessian at each iteration is unnecessary and computationally costly.
In the Quasi-Newton method, an approximate Hessian is used and updated upon each iteration. There are several update methods available, but the update method of Broyden-Fletcher-Goldfarb-Shanno (BFGS) has proven reliable and is used in this article’s algorithm:
Hi+1 = Hi + (yiyiT) / (yiTsi) – (HisisiTHi) / (siTHisi)
where si = pi+1 – pi and yi = ∇L(pi+1) – ∇L(pi).
This kind of successive updating and solving of QP problems is collectively known as sequential quadratic programming (SQP).
In summary, SQP solves constrained, nonlinear optimization problems by iteratively solving for a zero gradient of the objective function to be optimized. At each iteration, a check is performed to ensure the constraint conditions have not been violated, an updated approximate Hessian is calculated, and this is used to determine the step length and the direction of the stepped changes to the operating point upon the next iteration.
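The iteration described in this sidebar can be illustrated with a one-variable toy problem, where the "Hessian" is just the second derivative and is approximated from successive gradients rather than computed directly. The objective function below is an arbitrary demonstration curve, not a turbine model:

```python
# Toy one-variable quasi-Newton iteration: drive the gradient to zero,
# approximating the second derivative from successive gradients instead of
# computing it. The objective f(p) = (p - 3)**2 + 1 is arbitrary.

def grad(p):                        # gradient of f(p) = (p - 3)**2 + 1
    return 2.0 * (p - 3.0)

p_prev, p = 0.0, 1.0                # two starting points for the secant slope
for _ in range(50):
    h = (grad(p) - grad(p_prev)) / (p - p_prev)   # approximate Hessian
    p_prev, p = p, p - grad(p) / h                # p_{i+1} = p_i - H^-1 * grad
    if abs(p - p_prev) < 1e-10:
        break

print(p)  # -> 3.0, the minimizer of f
```

Because the toy objective is exactly quadratic, the secant slope equals the true second derivative and the iteration lands on the minimizer in one step; on the nonlinear Lagrangians of real load-sharing problems, the update is repeated until successive operating points agree.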
Further Reading on Sequential Quadratic Programming
Pantoja, J.F.O. De O., and D.Q. Mayne, “A Sequential Quadratic Programming Algorithm for Discrete Optimal Control Problems with Control Inequality Constraints,” Federal University of Maranhão, Maranhão, Brazil, and Imperial College, London.
Finardi, Erlon Cristian and Edson Luiz da Silva, “Solving the Hydro Unit Commitment Problem via Dual Decomposition and Sequential Quadratic Programming,” IEEE Transactions on Power Systems, Volume 21, No. 2, May 2006.
This article has been evaluated and edited in accordance with reviews conducted by two or more professionals who have relevant expertise. These peer reviewers judge manuscripts for technical accuracy, usefulness, and overall importance within the hydroelectric industry.
Michael Curtiss is a distribution engineer with Pacific Power. Jonas Parker is control systems integration engineer with Portland Engineering Inc. Parker Scoggins is an engineer with Power Engineers. The authors completed the research described in this article when all three were students at the Oregon Institute of Technology.