Refining PSO Applied to Electric Energy Cost Reduction in Water Pumping

Optimization Using PSO

Bio-inspired algorithms are among the techniques that have gained strength in recent years. This large group of algorithms includes PSO, developed by Eberhart and Kennedy in 1995 and later improved by Eberhart and Shi, who added the inertia constant (Eberhart and Kennedy, 1995; Eberhart and Shi, 2001). Since then, this algorithm has been widely used in continuous optimization problems with many variables, as well as in combinatorial problems and in mixed discrete/continuous problems.

PSO is a population-based algorithm in which particles are the elementary unit. Each particle is composed of two vectors of size D (the dimension of the problem): one representing the particle's position and the other its displacement velocity. At each iteration n, the particle is updated by renewing its position and velocity information, as described below (Eberhart and Kennedy, 1995).

The first step of the method is particle initialization, in which both the position and the initial velocity are randomly set within an interval of interest. The particles then search for optimal points of the problem, updating their velocities until one of the stopping criteria is met, such as reaching the maximum value within a random error, reaching the maximum number of iterations, or the absence of improvement in the objective function over a given iteration interval; other stopping criteria are also widely used in numerical problems (Faires and Burden, 2002).
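The stopping criteria described above can be combined in a single test. The sketch below is a minimal, hypothetical illustration (the function name, `patience`, and `tol` are assumptions, not part of the original method) that stops on an iteration budget or on stagnation of the best objective value in a minimization problem:

```python
def should_stop(iteration, max_iter, best_history, patience=50, tol=1e-8):
    """Hypothetical stopping test combining two common PSO criteria:
    an iteration budget, and stagnation of the best objective value
    (minimization) over the last `patience` iterations."""
    # Criterion 1: maximum number of iterations reached.
    if iteration >= max_iter:
        return True
    # Criterion 2: the best objective value has not improved by more
    # than tol over the last `patience` iterations.
    if len(best_history) > patience:
        improvement = best_history[-patience - 1] - best_history[-1]
        return improvement < tol
    return False
```

In practice the tolerance and patience window are tuned per problem; too small a window stops the swarm before the social term has propagated good positions.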

Considering that the swarm contains i particles, the position of particle i is described by a vector Xi with D coordinates, where Xi = (xi1, xi2, xi3, ..., xiD).

The particle's velocity can likewise be described by a vector with D components, where each component vid represents the velocity of particle i along coordinate d, i.e., Vi = (vi1, vi2, vi3, ..., viD).

The particles compare their positions among themselves and "remember" their previous positions, stored in their "memory". After evaluating the best solution, the method moves the particles that are far from it closer to the best solution. During the comparison, the best position found by particle i is stored in a vector called lbest (best local value), described as Pi = (pi1, pi2, pi3, ..., piD), and the best solution of the swarm is stored in a vector called gbest (best global value).

The swarm's behaviour can be described by the following equations:

vid(n+1) = w·vid(n) + c1·r1·[pid − xid(n)] + c2·r2·[pgd − xid(n)]     (11)

xid(n+1) = xid(n) + vid(n+1)     (12)

where d = 1, 2, ..., D; n = 1, 2, ..., N; N is the number of iterations; and pgd is the d-th coordinate of gbest. Additionally, r1 and r2 are randomly chosen numbers within the interval [0,1], and n represents the current iteration.

From equation (11), it can be observed that the particle velocity is updated at each iteration, which in turn updates the position. Part of this update is weighted by a coefficient carrying information on the best positions experienced by the particle itself, called the cognitive coefficient (c1). Another part is influenced by information on the best positions experienced by the group, weighted by a coefficient called the social coefficient (c2). Finally, the previous velocity is weighted by a coefficient named the weight or inertia coefficient (w) (Eberhart and Kennedy, 1995).
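The cognitive, social, and inertia terms described above can be sketched for a single particle as follows. This is a minimal illustration, not the authors' implementation; the function name and the default values of w, c1, and c2 are assumptions:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration for a single particle in a continuous space.

    x, v, pbest, gbest are lists of length D (the problem dimension).
    Returns the updated position and velocity vectors.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1)
        vd = (w * v[d]                             # inertia term (w)
              + c1 * r1 * (pbest[d] - x[d])        # cognitive term (c1)
              + c2 * r2 * (gbest[d] - x[d]))       # social term (c2)
        new_v.append(vd)
        new_x.append(x[d] + vd)                    # position update
    return new_x, new_v
```

In a full swarm this update runs for every particle per iteration, with pbest and gbest refreshed from the objective function evaluations before each pass.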

In the continuous problem, the particle "flies over" the entire search space. In a binary discrete space, the particle instead scans the vertices of a D-dimensional hypercube, searching for the best solution vertex. Whereas in continuous spaces the flying-over velocity is easily interpreted as the particle's displacement rate from its position xid at iteration n to its position at iteration n+1, which can be determined by equation (7), the interpretation of velocity in the binary case is more complex. In this case, the velocity is probabilistic and can be understood as the chance of position xid being 1 (Eberhart and Kennedy, 1997).

To transform the velocity into a position, a sigmoidal function is typically used, according to the following relation:

S(vid) = 1 / (1 + e^(−vid))     (13)

xid = 1 if rand() < S(vid), and xid = 0 otherwise

where rand() is a function that generates a random number in the interval [0,1] and S(vid) is the sigmoidal function applied to the velocity, which maps the velocity into the interval [0,1]. Therefore, depending on the velocity, this probability is responsible for updating the particle's position (Eberhart and Kennedy, 1997).

PSO and most bio-inspired methods are unrestricted search methods and hence have no mechanisms for treating restrictions inside their search routines. Therefore, a way to transform a restricted problem into an unrestricted one is required so that an unrestricted algorithm can be applied (Pillo and Grippo, 1989). One common form, widely used in restricted engineering problems, is the penalty function, which adds a certain value to the objective function in order to penalize deviations of the variables or associated parameters from the pre-defined restrictions (Yeniay, 2005).
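The sigmoidal transformation of velocity into a bit probability can be sketched as below. The function names are illustrative; the sigmoid is written in a numerically safe form (an implementation detail not discussed in the text) to avoid overflow for large negative velocities:

```python
import math
import random

def sigmoid(v):
    """Logistic function S(v) = 1 / (1 + e^(-v)), written to avoid
    overflow of math.exp for large |v|."""
    if v >= 0:
        return 1.0 / (1.0 + math.exp(-v))
    z = math.exp(v)
    return z / (1.0 + z)

def binary_position_update(velocities):
    """Binary PSO rule: bit d becomes 1 with probability S(v_d)."""
    return [1 if random.random() < sigmoid(vd) else 0 for vd in velocities]
```

Large positive velocities make a bit almost certainly 1, large negative velocities almost certainly 0, and a velocity of zero leaves the bit at a 50% chance either way.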

As the treatment of the restricted problem in this study, a penalty method for pressures named the "Fictitious Machine Method" was used, with the following formulation: let pref be the necessary pressure at a reference node, in meters of water column, and Qtub be the flow rate in the pipes arriving at this node, in m³/s. It is possible to calculate the power of a fictitious hydraulic machine that regularizes the pressure at this node, working as a pump when the pressure at the reference node is lower than the desired pressure, or as a turbine when it is higher. By multiplying the required power by each time interval and by the energy cost in that interval, the cost of establishing the necessary pressure at the observed node is obtained. The power of the fictitious hydraulic machine can be calculated with the following equation:

Pot = γ · Qtub · (pref − P)     (14)

where γ is the specific weight of the fluid transported by the studied system, and P is the pressure at the node.

Hence, the cost associated with the fictitious machine and the consequent penalty of the method can be written as

Cpen = k · f · Σt (ct · Pott · Δtt)     (15)

where c is the energy cost, f is the surcharge factor, and k is a factor related to the convergence characteristics of the method, which is used as a scale factor.
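A minimal sketch of the fictitious-machine penalty described by equations (14) and (15) is given below. The function name and argument layout are assumptions; the absolute pressure deviation is used so that both the pump and the turbine cases add a positive penalty, and γ defaults to the specific weight of water in N/m³:

```python
def fictitious_machine_penalty(pressures, flows, tariffs, p_ref,
                               dt=1.0, gamma=9810.0, k=1.0, f=1.0):
    """Penalty cost of a fictitious machine restoring p_ref at a node.

    pressures: node pressure in each time interval (m of water column)
    flows: pipe flow arriving at the node in each interval (m^3/s)
    tariffs: energy cost in each interval
    dt: length of each interval; gamma: specific weight of the fluid
    k: scale factor; f: surcharge factor (as in Eqs. (14)-(15)).
    """
    cost = 0.0
    for P, Q, c in zip(pressures, flows, tariffs):
        power = gamma * Q * abs(p_ref - P)   # Eq. (14): fictitious power, W
        cost += k * f * c * power * dt       # Eq. (15): cost of the interval
    return cost
```

When every interval already meets the reference pressure, the penalty vanishes and the objective function reduces to the real pumping cost.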

For the penalties on velocities and on pump stop/start operations, the classical penalty-function formulation presented by Parsopoulos and Vrahatis (2002) was used, which can be written as

pen(x) = λ · |x − x'|^t     (16)

where λ is a scale multiplying factor, |x-x'| is the modulus of the total deviation between the limit value x' and the variable x, and t is an exponent that defines the behaviour of the penalty function.
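Equation (16) can be sketched as a one-line helper. This is an illustrative assumption of how the penalty is applied (the function name, default values, and the choice of treating x' as an upper limit are not from the original text):

```python
def deviation_penalty(x, x_limit, lam=100.0, t=2):
    """Eq. (16)-style penalty: lam * |x - x_limit| ** t, applied here
    only when the variable exceeds an assumed upper limit x_limit."""
    if x <= x_limit:
        return 0.0
    return lam * abs(x - x_limit) ** t
```

With t = 2 the penalty grows quadratically with the violation, which keeps the penalized objective smooth near the boundary; larger t punishes large violations more sharply.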