An Analysis of Various Applications and Techniques of Computational Intelligence in Emerging Electric Power Systems
Exploring the Application of Computational Intelligence in Power System Issues
by Satyapriya Satapathy*,
- Published in International Journal of Information Technology and Management, E-ISSN: 2249-4510
Volume 4, Issue No. 1, Feb 2013, Pages 0 - 0 (0)
Published by: Ignited Minds Journals
ABSTRACT
Electric power systems worldwide are changing in terms of structure, operation, management and ownership for technical, financial and ideological reasons. Power systems keep expanding in terms of geographical area, asset additions, and penetration of new technologies in generation, transmission and distribution. This makes the electric power system complex and heavily stressed, and consequently vulnerable to cascading blackouts. Traditional methods for solving power system design, planning, operation and control problems have been used extensively for different applications, but these methods suffer from several difficulties, such as requiring the existence of derivatives, providing suboptimal solutions, and so forth. Computational intelligence (CI) methods can give better solutions in several situations and are being widely applied in electrical engineering. This paper highlights the application of computational intelligence methods to power system problems. The different types of CI methods that are widely used in power systems are also discussed in brief.
KEYWORDS
electric power systems, computational intelligence, structure, operation, management, ownership, geographical areas, resources, generation, transmission, distribution, cascading blackouts, traditional methods, power system design, planning, control issues, computational intelligence methods, electrical engineering, application, techniques
INTRODUCTION
Increased interconnection and loading of the power system, along with deregulation and environmental concerns, have brought new challenges for electric power system operation, control and automation. In a restructured electricity market, the operation and control of the power system become complex because of modeling complexity and uncertainties. Power system models used for intelligent operation and control are highly dependent on the task at hand. In a competitive electricity market with growing automation, computational intelligence techniques are very helpful. As electric utilities try to provide smart solutions meeting economic, technical (security, stability and good power quality) and environmental goals, there are several challenging issues in smart grid solutions, including, but not limited to: forecasting of load, price and ancillary services; penetration of new and renewable energy sources; bidding strategies of participants; power system planning and control; operating decisions under missing information; increased distributed generation and demand response in the electricity market; and tuning of controller parameters under varying operating conditions. Risk management and financial management in the electricity sector are concerned with finding an ideal trade-off between maximizing expected returns and minimizing the risks associated with these investments. Computational intelligence (CI) is a new and modern tool for solving complex problems that are difficult to solve with conventional techniques. Heuristic optimization techniques are general-purpose methods that are very flexible and can be applied to many types of objective functions and constraints. Recently, these new heuristic tools have been combined among themselves, and new methods have emerged that combine elements of nature-based methods or that have their foundation in stochastic and simulation methods. Developing solutions with these tools offers two major advantages: development time is much shorter than with more conventional approaches, and the resulting systems are very robust, being relatively insensitive to noisy and/or missing data, also known as uncertainty. Because of environmental, right-of-way and cost issues, there is increased interest in better utilization of available power system capacity in both bundled and unbundled power systems.
Generation patterns that produce heavy flows tend to cause greater losses and to weaken stability and security, ultimately making certain generation patterns economically undesirable. Hence, new devices and resources such as flexible AC transmission systems (FACTS), distributed generation, smart grid technologies, and so on are increasingly being used. In the developing area of power systems, computational intelligence plays a key role in providing better solutions to existing and new problems. This paper lists various potential areas of power systems and presents the role of computational intelligence in developing power systems. A brief survey of computational techniques is also presented.
Possible Areas of Analysis in Power Systems Employing Computational Intelligence
There are several problems in power systems that cannot be solved using conventional approaches, because these methods rely on assumptions that may not hold all the time. In such situations, computational intelligence techniques are the only choice; however, these techniques are not limited to such applications. The following areas of power systems make use of computational intelligence:
• Power system operation (including unit commitment, economic dispatch, hydro-thermal coordination, maintenance scheduling, congestion management, load/power flow, state estimation, etc.)
• Power system planning (including generation expansion planning, transmission expansion planning, reactive power planning, power system reliability, etc.)
• Power system control (such as voltage control, load frequency control, stability control, power flow control, dynamic security assessment, etc.)
• Power plant control (including thermal power plant control, fuel cell power plant control, etc.)
• Network control (location and sizing of FACTS devices, control of FACTS devices, etc.)
• Electricity markets (including bidding strategies, market analysis and clearing, etc.)
• Power system automation (such as restoration and management, fault diagnosis and reliability, network security, etc.)
• Distribution system applications (such as operation and planning of distribution systems, demand side management and demand response, network reconfiguration, operation and control of smart grids, etc.)
• Distributed generation applications (such as distributed generation planning, operation with distributed generation, wind turbine plant control, solar photovoltaic power plant control, renewable energy sources, etc.)
• Forecasting applications (such as short term load forecasting, electricity market forecasting, long term load forecasting, wind power forecasting, solar power forecasting, etc.)
DIFFERENT COMPUTATIONAL INTELLIGENCE TECHNIQUES
Computational intelligence (CI) methods that seek a global optimum, or a near-optimal solution, such as expert systems (ES), artificial neural networks (ANN), genetic algorithms (GA), evolutionary computation (EC), fuzzy logic, and so on, have emerged in recent years as effective tools in power system applications. These methods are also referred to as artificial intelligence (AI) in some works. In a practical power system, it is important to capture human knowledge and experience accumulated over time, owing to various uncertainties, load variations, topology changes, and so on. This section presents an overview of the CI/AI methods (ANN, GA, fuzzy systems, EC, ES, ant colony search, Tabu search, etc.) used in power system applications.
Artificial Neural Networks -
An artificial neural network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information (Bishop, 1995). The key element of this paradigm is the novel structure of the information-processing system, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. The starting point of ANN applications was the training algorithm proposed by Hebb in 1949, which demonstrated how a network of neurons could exhibit learning behavior. During the training phase, the neurons are subjected to a finite set of examples called training sets, and the neurons then adjust their weights according to a certain learning rule. ANNs are not programmed in the conventional sense; rather, they learn to solve the problem through interaction with the environment. Very little computation is carried out at the site of an individual node (neuron). There is no explicit memory or processing location in a neural network; these are implicit in the connections
between nodes. Not all inputs feeding a node are of equal importance; everything depends on the weight, which can be negative or positive. Inputs arriving at a node are transformed according to the node's activation function. Although neural network (NN) training is generally computationally expensive, evaluating a result takes negligible time once the network has been trained. Notwithstanding the advantages, some drawbacks of the ANN are: (i) large dimensionality, (ii) selection of the optimal configuration, (iii) choice of training methodology, (iv) the 'black-box' representation of ANNs, which lack explanation capabilities, and (v) the fact that results are always produced even if the input data are unreasonable. Another drawback of neural network systems is that they are not flexible, i.e. once an ANN is trained for a certain task, it is difficult to extend it to other tasks without retraining the NN. Artificial neural networks are among the most promising techniques for power system problems and have been used in several applications. ANNs are mainly classified by their architecture (number of layers), topology (connection pattern, feedforward or recurrent, etc.) and learning regime. Based on the architecture, an ANN model may be a single-layer ANN, which includes the perceptron model and ADALINE. ANN models can be further classified as feedforward NN and feedback (recurrent) NN based on neuron interactions. Learning of an ANN may be supervised learning, unsupervised learning, or reinforcement learning. Based on neuron structure, an ANN model may be classified as a multilayer perceptron model, Boltzmann machine, Cauchy machine, Kohonen self-organizing map, bidirectional associative memory, adaptive resonance theory 1 (ART-1), adaptive resonance theory 2 (ART-2), or counter-propagation ANN. Some other special ANN models are parallel self-hierarchical NN, recurrent NN, radial basis function NN, knowledge-based NN, hybrid NN, wavelet NN, cellular NN, quantum NN, dynamic NN, and so on. A minimal training sketch is given below.
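As a concrete illustration of the supervised-learning workflow described above, the following minimal sketch trains a small feedforward network with one hidden layer by gradient descent on a toy nonlinear regression task. The network size, learning rate and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy supervised-learning task: approximate y = sin(x) from samples.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# One hidden layer with tanh activation; sizes are illustrative.
n_hidden, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y_hat = h @ W2 + b2               # linear output layer
    err = y_hat - y                   # prediction error

    # Backward pass: gradients of the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)

    # Weight update: the 'learning rule' adjusting weights from examples.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err**2).mean()))
```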
Genetic Algorithms -
A genetic algorithm (GA) is an optimization technique based on the mechanics of natural selection and natural genetics. Its basic principle is that the fittest member of a population has the highest probability of survival. The most common conventional optimization techniques fall into two classes, namely calculus-based methods and enumerative schemes. Although well developed, these techniques have significant drawbacks. Calculus-based optimization generally relies on continuity assumptions and the existence of derivatives. Enumerative techniques rely on special convergence properties and auxiliary function evaluation. The genetic algorithm, on the other hand, works only with objective function information in its search for an optimal parameter set. The GA can be distinguished from other optimization methods by four characteristics: (i) the GA works on a coding of the parameter set rather than the actual parameters; (ii) the GA searches for optimal points using a population of possible solution points, not a single point, an important characteristic which makes the GA more robust and results in implicit parallelism; (iii) the GA uses only objective function information, with no other auxiliary information (e.g. derivatives) required; and (iv) the GA uses probabilistic transition rules, not deterministic rules. A minimal GA sketch is given below.
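The following minimal sketch, with an illustrative encoding and parameter choices not taken from the paper, exhibits the four characteristics on a toy problem: candidate solutions are coded as bit strings, a whole population is evolved, only the objective value is used, and selection, crossover and mutation are probabilistic.

```python
import random

random.seed(1)
BITS, POP, GENS = 16, 30, 60

def decode(bits):
    # Map a 16-bit string to a real number in [0, 1].
    return int("".join(map(str, bits)), 2) / (2**BITS - 1)

def fitness(bits):
    # The objective value is the only problem information the GA uses.
    x = decode(bits)
    return x * (1 - x)  # maximized at x = 0.5

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    for _ in range(POP):
        # Probabilistic selection: two binary tournaments pick the parents.
        p1 = max(random.sample(pop, 2), key=fitness)
        p2 = max(random.sample(pop, 2), key=fitness)
        # One-point crossover on the coded parameter set.
        cut = random.randrange(1, BITS)
        child = p1[:cut] + p2[cut:]
        # Bit-flip mutation with small probability.
        child = [1 - g if random.random() < 0.02 else g for g in child]
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best x:", decode(best), "fitness:", fitness(best))
```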
Fuzzy Logic -
Fuzzy logic (FL) was developed by Zadeh in 1964 to address uncertainty and imprecision, which widely exist in engineering problems; it was first introduced in 1979 for solving power system problems. Fuzzy set theory can be considered a generalization of classical set theory. In classical set theory, an element of the universe either belongs to or does not belong to the set; thus the degree of association of an element is crisp. In fuzzy set theory, the association of an element can vary continuously. Mathematically, a fuzzy set is a mapping (known as the membership function) from the universe of discourse to the closed interval [0, 1]. The membership function is the measure of the degree of similarity of any element in the universe of discourse to a fuzzy subset. Triangular, trapezoidal, piecewise-linear and Gaussian functions are the most commonly used membership functions, and the membership function is usually designed by taking into consideration the requirements and constraints of the problem. For example, a 'heavy load' fuzzy set might assign membership 0 below 400 MW, rising linearly to 1 at 600 MW. Fuzzy logic implements human experience and preferences via membership functions and fuzzy rules. Owing to the use of fuzzy variables, the system can be made understandable to a non-expert operator. In this way, fuzzy logic can be used as a general methodology to incorporate knowledge, heuristics or theory into controllers and decision makers. The advantages of fuzzy theory are as follows: (i) it more accurately represents the operational constraints of power systems and (ii) fuzzified constraints are softer than traditional constraints.

Evolutionary Computation: Evolutionary Strategies and Evolutionary Programming -
Natural evolution is a hypothetical population-based optimization process. Simulating this process on a computer results in stochastic optimization techniques that can often perform better than classical methods of optimization for real-world problems. Evolutionary computation (EC) is based on Darwin's principle of 'survival of the fittest'. An evolutionary algorithm begins by initializing a population of solutions to a problem. New solutions are then created by randomly varying those of the initial population. All solutions are measured with respect to how well they address the task. Finally, a selection criterion is applied to weed out the solutions that are below standard. The process is iterated using the selected set of solutions until a stopping criterion is met. The advantages of EC are adaptability to change and the ability to generate good-enough solutions, but these need to be weighed against its computing requirements and convergence properties. EC can be subdivided into GA, evolution strategies, evolutionary programming (EP), genetic programming, classifier systems, simulated annealing (SA), etc. The first work in the field of evolutionary computation was reported by Fraser in 1957 (Fraser, 1957), who studied aspects of genetic systems using a computer. Over time, a number of evolution-inspired optimization techniques were developed. Evolution strategies (ES) employ real-coded variables and, in the original form, relied on mutation as the search operator and a population size of one. Since then, ES has evolved to share many features with GA. The major similarity between these two types of algorithms is that they both maintain populations of potential solutions and use a selection mechanism for choosing the best individuals from the population. The main differences are:
• ES operates directly on floating point vectors while classical GAs operate on binary strings,
• GAs rely mainly on recombination to explore the search space while ES uses mutation as the dominant operator, and
• ES is an abstraction of evolution at the level of individual behavior, stressing the behavioral link between an individual and its offspring, while GA maintains the genetic link.
Evolutionary programming (EP), a stochastic optimization strategy similar to GA, places emphasis on the behavioral linkage between parents and their offspring, rather than seeking to emulate specific genetic operators as observed in nature. EP is similar to evolution strategies, although the two approaches were developed independently. Like ES and GA, EP is a useful method of optimization when other techniques such as gradient descent or direct analytical discovery are not applicable. Combinatorial and real-valued function optimization, in which the optimization surface or fitness landscape is 'rugged' and possesses many locally optimal solutions, is well suited to evolutionary programming. A minimal evolution strategy sketch follows.
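To make the GA/ES contrast above concrete, here is a minimal (1+1) evolution strategy in the original spirit: a real-coded vector, Gaussian mutation as the only operator, and a population of one. The toy objective and the simplified step-size adaptation constants are illustrative assumptions.

```python
import random, math

random.seed(2)

def objective(x):
    # Toy 'rugged' objective to minimize: sphere plus ripples.
    return sum(xi**2 for xi in x) + sum(1 - math.cos(3 * xi) for xi in x)

dim, sigma = 4, 1.0
parent = [random.uniform(-5, 5) for _ in range(dim)]
f_parent = objective(parent)

for it in range(2000):
    # Gaussian mutation of a floating point vector is the only operator.
    child = [xi + random.gauss(0, sigma) for xi in parent]
    f_child = objective(child)
    if f_child <= f_parent:
        parent, f_parent = child, f_child
        sigma *= 1.1   # success: widen the search (simplified 1/5 rule)
    else:
        sigma *= 0.98  # failure: narrow the search

print("best objective value:", f_parent)
```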
Simulated Annealing -
This method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983, and by Vlado Černý in 1985. Based on the annealing process in statistical mechanics, simulated annealing (SA) was introduced for solving complicated combinatorial optimization problems. In a large combinatorial optimization problem, an appropriate perturbation mechanism, cost function, solution space and cooling schedule are required in order to find an optimal solution with simulated annealing. SA is effective in network reconfiguration problems for large-scale distribution systems, and its search capability becomes more significant as the system size increases. Moreover, a cost function with a smoothing strategy enables the SA to escape more easily from local minima and to reach the vicinity of an optimal solution rapidly. The advantages of SA are its general applicability to arbitrary systems and cost functions, its ability to refine an optimal solution, and its simplicity of implementation even for complex problems. The major drawback of SA is repeated annealing; the method cannot tell whether it has found an optimal solution, so some other method (e.g. branch and bound) is required for this. SA has been used in various power system applications such as transmission expansion planning, unit commitment, maintenance scheduling, etc. A minimal sketch appears below.
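The following minimal SA sketch works under illustrative assumptions (a toy multimodal cost function, Gaussian perturbation, geometric cooling schedule); the acceptance test is the standard Metropolis criterion, which is what lets the search climb out of local minima.

```python
import random, math

random.seed(3)

def cost(x):
    # Toy multimodal cost function to minimize.
    return x**2 + 10 * math.sin(2 * x)

x, T, alpha = random.uniform(-10, 10), 10.0, 0.995
best_x, best_c = x, cost(x)

while T > 1e-3:
    # Perturbation mechanism: a small random move in the solution space.
    x_new = x + random.gauss(0, 1)
    delta = cost(x_new) - cost(x)
    # Metropolis criterion: always accept improvements, sometimes accept
    # uphill moves so the search can escape local minima.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = x_new
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
    T *= alpha  # geometric cooling schedule

print("best x:", best_x, "cost:", best_c)
```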
Expert Systems -
AI programs that achieve expert-level competence in solving problems by reasoning over bodies of task-specific knowledge are called knowledge-based or expert systems (ES), first proposed by Feigenbaum et al. in the early 1970s. An ES is a knowledge-based or rule-based system that uses knowledge and an inference technique to solve problems difficult enough to require human expertise. The principal advantages of an ES are: (a) it is permanent and consistent, (b) it can be easily transferred or reproduced, and (c) it can be easily documented. The principal disadvantage of an ES is that it suffers from a knowledge bottleneck through its inability to learn or adapt to new situations. Knowledge engineering techniques began with simple rule-based systems and extended to more advanced techniques such as object-oriented design, qualitative reasoning, verification and validation methods, natural languages, and multi-agent systems. Over the past several years, a large number of ES applications have been developed to plan, design, analyze, manage, control and operate various parts of power generation, transmission and distribution systems. Expert systems have also been applied in recent years to load, bid and price forecasting. A minimal rule-based sketch follows.
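This minimal sketch illustrates the rule-based idea: operator knowledge encoded as if-then rules, applied by a simple forward-chaining inference loop. The rules and the facts about a hypothetical feeder fault are invented for illustration only.

```python
# Each rule: (set of premises, conclusion). Rules and facts here are
# hypothetical, invented to illustrate forward chaining.
rules = [
    ({"breaker_tripped", "relay_overcurrent"}, "feeder_fault"),
    ({"feeder_fault", "recloser_failed"}, "sustained_outage"),
    ({"sustained_outage"}, "dispatch_crew"),
]

facts = {"breaker_tripped", "relay_overcurrent", "recloser_failed"}

# Forward chaining: fire any rule whose premises are all satisfied,
# adding its conclusion to the fact base, until nothing new is derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("derived facts:", sorted(facts))  # includes 'dispatch_crew'
```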
Ant Colony and Tabu Search -
Dorigo introduced the ant colony search (ACS) system for the first time in 1992 (Dorigo, 1992). ACS techniques take inspiration from the behavior of real ant colonies and are used to solve functional or combinatorial problems. ACS algorithms to some extent mimic the behavior of real ants. The main characteristics of ACS are positive feedback for the recovery of good solutions, distributed computation, which avoids premature convergence, and the use of a constructive heuristic to find acceptable solutions in the early stages of the search process. The main drawback of the ACS technique is its poor computational performance. The ACS technique has mainly been used in finding the shortest route for a transmission network. Tabu search (TS) is basically a gradient-descent search with memory. The memory preserves a number of previously visited states along with a number of states that might be considered unwanted; this information is stored in a Tabu list. The definition of a state, the area around it and the length of the Tabu list are critical design parameters. In addition to these Tabu parameters, two extra parameters are often used: aspiration and diversification. Aspiration is used when all the neighboring states of the current state are also included in the Tabu list; in that case, the Tabu restriction is overridden by selecting a new state. Diversification adds randomness to this otherwise deterministic search: if the Tabu search is not converging, the search is reset randomly. TS is an iterative improvement procedure that starts from some initial solution and attempts to determine a better solution in the manner of a 'greatest descent neighborhood' search algorithm. The basic components of TS are the moves, the Tabu list and the aspiration level. TS is a metaheuristic for global optimization based on multi-level memory management and responsive exploration. TS has been used in various power system applications such as transmission planning, optimal capacitor placement, unit commitment, hydrothermal scheduling, fault diagnosis/alarm processing, reactive power planning, etc. A short sketch is given below.
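A minimal Tabu search sketch on a toy integer optimization problem; the neighborhood (adjacent integers), Tabu tenure and aspiration test are illustrative choices, not drawn from the paper.

```python
from collections import deque

def cost(x):
    # Toy cost over integer states; global minimum at x = 7.
    return (x - 7) ** 2 + (5 if x % 3 == 0 else 0)

current, best = 0, 0
tabu = deque(maxlen=5)  # short-term memory of recently visited states

for _ in range(50):
    neighbors = [current - 1, current + 1]
    # Aspiration: a Tabu move is allowed if it beats the best known cost.
    allowed = [n for n in neighbors
               if n not in tabu or cost(n) < cost(best)]
    if not allowed:
        allowed = neighbors  # override the Tabu restriction rather than stall
    current = min(allowed, key=cost)  # greatest-descent neighbor
    tabu.append(current)
    if cost(current) < cost(best):
        best = current

print("best state:", best, "cost:", cost(best))
```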
Particle Swarm Optimization -
The particle swarm optimization (PSO) method, introduced by Kennedy and Eberhart, is a self-adaptive optimization algorithm that can be applied to any nonlinear optimization problem. In PSO, the potential solutions, called particles, fly through the problem space by following the best fitness of the particles. It is easily implemented in most programming languages and has proven to be both fast and effective when applied to a diverse set of optimization problems. In PSO, the particles are 'flown' through the problem space by following the current optimum particles. Each particle keeps track of its coordinates in the problem space associated with the best solution (fitness) it has achieved so far. This implies that each particle has memory, which allows it to remember the best position in the feasible search space that it has ever visited; this value is commonly called pbest. Another best value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle; this location is commonly called gbest. A minimal sketch follows.
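A minimal global-best PSO sketch; the inertia weight and acceleration coefficients (w, c1, c2) and the toy objective are illustrative assumptions. The core is the standard velocity update v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).

```python
import random

random.seed(4)

def fitness(x):
    # Toy objective to minimize; optimum at x = 3.
    return (x - 3.0) ** 2

w, c1, c2, n = 0.7, 1.5, 1.5, 20
xs = [random.uniform(-10, 10) for _ in range(n)]   # particle positions
vs = [0.0] * n                                     # particle velocities
pbest = xs[:]                                      # personal best positions
gbest = min(pbest, key=fitness)                    # global best position

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia plus pulls toward pbest and gbest.
        vs[i] = (w * vs[i]
                 + c1 * r1 * (pbest[i] - xs[i])
                 + c2 * r2 * (gbest - xs[i]))
        xs[i] += vs[i]
        if fitness(xs[i]) < fitness(pbest[i]):
            pbest[i] = xs[i]   # each particle remembers its best position
    gbest = min(pbest, key=fitness)

print("gbest:", gbest, "fitness:", fitness(gbest))
```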
Support Vector Machines -
The support vector machine (SVM) is one of the relatively new and promising methods for learning separating functions in pattern recognition (classification) tasks, as well as for performing function estimation in regression problems. It originated from supervised learning systems derived from the statistical learning theory introduced by Vapnik for 'distribution-free learning from data'. In this method, the data are mapped into a high-dimensional space via a nonlinear map, and in this high-dimensional space an optimal separating hyperplane or linear regression function is constructed. This process involves a quadratic programming problem and produces a globally optimal solution. The great advantage of the SVM approach is that it greatly reduces the number of operations in the learning mode and minimizes the generalization error on the test set under the structural risk minimization (SRM) principle.
The main objective of the SRM principle is to choose the model complexity optimally for a given training sample. The input space of an SVM is nonlinearly mapped onto a high-dimensional feature space; the idea is to map the inputs into a much bigger space so that the decision boundary is linear in the new space. SVMs are thus able to find nonlinear boundaries when classes are not linearly separable. Another important feature of the SVM is the use of kernels: kernel representations offer an alternative solution by nonlinearly projecting the input space onto a high-dimensional feature space. The main advantage of using SVMs for classification is generalization performance; SVMs perform better than neural networks in terms of generalization. Overfitting or underfitting can occur when too much or too little training data is used. Computational complexity is another factor in the choice of SVMs as classifiers. A further advantage of an SVM-based system is that it is straightforward to extend the system when new types of cases are added to the classifier. A small classification sketch follows.
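A small classification sketch using the widely available scikit-learn library (an assumption; the paper does not name an implementation). The RBF kernel implicitly maps inputs into a high-dimensional feature space so that the separating boundary is linear there, matching the kernel idea above.

```python
import numpy as np
from sklearn.svm import SVC

# Toy, nonlinearly separable data: class 1 inside a circle, class 0 outside.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.5).astype(int)

# The RBF kernel projects inputs to a high-dimensional feature space,
# where the optimal separating hyperplane is found by solving a QP.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X[:200], y[:200])           # train on the first 200 samples

print("test accuracy:", clf.score(X[200:], y[200:]))
```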
CONCLUSION
Interest in using computational intelligence (CI) for power system applications is growing among researchers and academicians, and a huge number of research papers and articles are available across all areas of engineering and science. There is no specific guideline for the application of CI techniques to power systems. In this paper, various computational techniques widely used in power system applications have been briefly described, and the potential areas of CI application have been highlighted. New intelligent system technologies using digital signal processing techniques, expert systems, artificial intelligence and machine learning provide several novel advantages in solving power system problems.
REFERENCES
- Bishop C.M., 1995, Neural networks for pattern recognition, Oxford University Press, Oxford.
- El-Hawary Mohamed E., 2004, Electric power applications of fuzzy systems, John Wiley, USA.
- Kennedy J., and Eberhart R., 1995, Particle swarm optimization, Proc. of the IEEE Int. Conf. on Neural Networks, Piscataway, NJ, pp. 1942–1948.
- Momoh James A., El-Hawary Mohamed E., 2000, Electric systems, dynamics, and stability with artificial intelligence, Marcel Dekker, Inc., USA.
- Sobajic Dejan J., 1993, Neural network computing for the electric power industry, Routledge Publisher, USA.
- Vapnik V.N., 1998, Statistical learning theory, John Wiley and Sons, New York.
- Warwick K., Ekwue Arthur, Aggarwal Raj, 2006, Artificial intelligence techniques in power systems, IEE, UK.
- Wehenkel Louis A., 1998, Automatic learning techniques in power systems, Kluwer Academic Publishers, USA.