Study of Data Clustering Techniques Based on Various Artificial Neural Network and Particle Swarm Optimization
Improving Fault Detection and Isolation in Process Plant Monitoring
by Satish Kumar Malik*,
- Published in International Journal of Information Technology and Management, E-ISSN: 2249-4510
Volume 1, Issue No. 1, Aug 2011
Published by: Ignited Minds Journals
ABSTRACT
This paper proposes different conventional and fuzzy based clustering techniques for fault detection and isolation in process plant monitoring. Process plant monitoring is an important aspect of improving the productivity and efficiency of both the product and the plant. This paper takes a case study of plant data and implements SOM-based training methods to cluster the data and to detect and isolate faults. It also discusses a new hybrid clustering algorithm combining the fuzzy C-means algorithm and PSO.
KEYWORDS
data clustering techniques, artificial neural network, particle swarm optimization, fault detection, isolation, process plant monitoring, productiveness, efficiency, case study, SOM, training methods, hybrid clustering algorithm, fuzzy C means algorithm, PSO
INTRODUCTION
Clustering is a typical unsupervised learning technique for grouping similar data points [4, 8, 53]. A clustering algorithm assigns a large number of data points to a smaller number of groups such that data points in the same group share similar properties while data points in different groups are dissimilar. Clustering has many applications, including part family formation for group technology, image segmentation, information retrieval, web page grouping, market segmentation, and scientific and engineering analysis. One of the best known and most popular clustering algorithms is the k-means algorithm [53]. The algorithm is efficient at clustering large data sets because its computational complexity grows only linearly with the number of data points. However, it may converge to solutions that are not optimal. Many clustering methods have been proposed; they can be broadly classified into four categories: partitioning methods, hierarchical methods, density-based methods, and grid-based methods. Clustering techniques that do not fit these categories have also been developed, including fuzzy clustering, artificial neural networks, and genetic algorithms.
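As a concrete reference point, a minimal k-means (Lloyd's algorithm) can be sketched in a few lines; the function name and parameters below are illustrative, not from the paper. Each iteration costs O(nk) distance evaluations, which is why the method scales linearly with the number of data points, and the random initialisation is what can trap it in a non-optimal solution:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # initialise centroids with k distinct random data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid (linear in the number of points)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points;
        # keep the old centroid if a cluster went empty
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # converged (possibly only to a local optimum)
        centers = new_centers
    return labels, centers
```

On well-separated data this recovers the natural grouping; on harder data the result depends on the random initialisation, which motivates the hybrid schemes discussed later in the paper.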
INDUSTRIAL PROCESS MONITORING, FAULT DETECTION AND ISOLATION
Monitoring is a continuous real-time task of determining the condition of a physical system by recording information and recognizing and indicating anomalies in its behavior. In other words, the purpose of monitoring is to indicate whether a process has deviated from its acceptable state and, if it has, why [25, 26]. The deviations are called process faults. Observation of the faults is known as fault detection, which is followed by fault isolation, i.e. determination of the location and type of the fault. Fault Detection and Isolation (FDI), also known collectively as fault diagnosis, can be carried out in many ways. The three logical parts of any FDI scheme, namely detection, decision, and isolation, may be partially integrated. Fault detection takes as input the current values of the process measurements and produces one or more fault indicator signals, which are often called residuals. After the detection phase, an inference mechanism takes the fault indicator(s) as input and decides whether a fault has occurred. Detection of a fault is followed by an isolation phase, which carries out identification of the fault. Fault detection methods are divided into two categories: first principles process models and models of process data. In the former approach, the physical structure and a priori known relationships between variables of a process form the basis for the construction of the model, and observed data are not required. In the latter case,
Available online at www.ignited.in Page 2
the structure of the model is generic or depends on the data, and the model is based on observed data produced by the sensors of the process. However, in practice the line between these two categories is not sharp: measurement data may be used in the construction of a first principles model and, correspondingly, a priori knowledge of a process can be used in the construction of a model using process data. Use of data-driven models instead of first principles models is justified if construction of an accurate first principles model is (1) impossible due to unknown process phenomena or (2) computationally infeasible because of the complexity of the process. In addition, if modeling using first principles would be possible but laborious, use of a data-based model may still be a reasonable choice if the process is modified often: a model based on data is typically easier to update than a first principles model.
SELF-ORGANIZING MAP
The Self-Organizing Map (SOM, also known as the Kohonen map) is an unsupervised artificial neural network and a powerful method for clustering and visualization of high-dimensional data [1]. The SOM algorithm implements a nonlinear, topology-preserving mapping from a high-dimensional input data space onto a low-dimensional discrete space (usually 1D, 2D, or 3D) called the topological map. A map consists of m neurons (or units) located on a regular low-dimensional grid, usually a two-dimensional rectangular or hexagonal grid, that defines their neighborhood relationships. Each neuron c is represented by a weight vector W_c = [w_1, …, w_d], where d is the dimension of the input vector.
TRAINING OF THE SOM
During the training procedure, the weight vectors are adapted in such a way that observations that are close in the input space activate close neurons of the SOM. The SOM is trained iteratively. At each training step, a sample input data vector x is drawn at random from the training data set, and the distance between it and all the weight vectors of the SOM is calculated. The neuron whose weight vector is closest to the input vector is called the best-matching unit, often denoted bmu:
‖x − w_bmu‖ = min_c ‖x − w_c‖    (1)
where w_bmu is the weight vector of the best-matching unit. After finding the bmu, the weight vectors of the SOM are updated: the weight vectors of the bmu and its topological neighbours are moved closer to the input data vector. The weight-updating rule for unit i is:
w_i(τ+1) = w_i(τ) + α(τ) h_bmu(i, τ) [x(τ) − w_i(τ)]    (2)
where τ is time, α(τ) is a learning rate, and h_bmu(i, τ) is the neighbourhood kernel function around the bmu. Usually, α(τ) is a decreasing function of time with values between 0 and 1. The Gaussian neighbourhood function is chosen:
h_bmu(i, τ) = exp(−‖r_bmu − r_i‖² / 2σ²(τ))    (3)
where σ(τ) is the neighborhood radius, which decreases with time:
σ(τ) = σ_0 (σ_f / σ_0)^(τ/T)    (4)
where σ_0 and σ_f are the initial and final neighborhood radii, T is the training length, and r_bmu and r_i are the positions of the corresponding neurons on the map.

Fig. 1: Self-Organizing Map
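The training loop described by Eqs. (1)-(4) can be sketched as follows; this is a minimal illustration with assumed parameter values (grid size, training length, initial learning rate), not the exact configuration used in the paper:

```python
import numpy as np

def train_som(X, grid=(5, 5), T=500, alpha0=0.5, sigma0=2.0, sigma_f=0.5, seed=0):
    """Minimal SOM training loop following Eqs. (1)-(4)."""
    rng = np.random.default_rng(seed)
    m = grid[0] * grid[1]
    # neuron positions r_i on a rectangular 2-D grid
    pos = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    W = rng.normal(size=(m, X.shape[1]))                 # weight vectors w_c
    for t in range(T):
        x = X[rng.integers(len(X))]                      # random training sample
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # Eq. (1): best-matching unit
        frac = t / T
        alpha = alpha0 * (1 - frac)                      # decreasing learning rate
        sigma = sigma0 * (sigma_f / sigma0) ** frac      # Eq. (4): radius decay
        d2 = np.sum((pos - pos[bmu]) ** 2, axis=1)       # squared grid distances
        h = np.exp(-d2 / (2 * sigma ** 2))               # Eq. (3): Gaussian kernel
        W += alpha * h[:, None] * (x - W)                # Eq. (2): pull units toward x
    return W, pos
```

After training, the codebook W approximates the input distribution, so the quantization error (mean distance from each data vector to its bmu) is small.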
LABELLING OF THE SOM
After the training phase, the SOM can be used to construct a classifier in which each neuron represents one class type. The classifier then assigns each data vector to the cluster of its corresponding bmu. However, training of the self-organizing map is completely unsupervised, so it is not known what kind of data each of the obtained units represents. If labelled data are available, this information can be used to assign each neuron a label: the SOM is labelled by a vote among the labels of the input data vectors mapped to each unit, and only the most frequent label is kept. Finally, the class label of each original data vector is the label of its corresponding bmu.
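The labelling-by-vote step can be sketched as below; `label_som` and its arguments are illustrative names, assuming a trained codebook `W` and labelled data `(X, y)`:

```python
import numpy as np
from collections import Counter

def label_som(W, X, y):
    """Assign each SOM unit the majority label of the data vectors it wins,
    then label each data vector with the label of its bmu."""
    # bmu index for every data vector
    bmus = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
    unit_labels = {}
    for u in range(len(W)):
        hits = y[bmus == u]
        if len(hits):
            # majority vote among the labels mapped to this unit
            unit_labels[u] = Counter(hits.tolist()).most_common(1)[0][0]
    # each data vector inherits the label of its best-matching unit
    return np.array([unit_labels[b] for b in bmus]), unit_labels
```

Units that receive no labelled data stay unlabelled; as noted below, such units tend to mark cluster borders on the map.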
SOM BASED CLUSTERING OF DATA
The basic idea is that the SOM is trained using data where all the values are present. As the on-line and off-line measurements are interrelated, the SOM learns the association between them, after which it can be used to predict the off-line values (or to classify) given only the on-line measurements [20]. This construction is called the "supervised SOM".

Fig. 2: Use of SOM in process monitoring-I
Fig. 3: Use of SOM in process monitoring-II
Fig. 4: Analysis, computation and monitoring of a process using SOM

The SOM can efficiently be used for visualization of dependencies and interrelations between variables of high-dimensional data in various ways [5]. In this work, measurement data acquired from industrial processes are used as input data for SOMs. Careful preparation of the data is always necessary in order to obtain satisfactory results with any pattern recognition algorithm, and the SOM is no exception. In both cases, a data set is first used to train a map. The vertical direction denotes analysis of the data set the map was trained with; it can be further divided into visualization and clustering. The component plane representation displays the values of each model vector element, i.e. each variable, on the map grid.

Fig. 5: Clustering of data using SOM
Fig. 6: Block diagram of the use of SOM in process monitoring
Fig. 7: SOM-based illustrative clustering method
Available online at www.ignited.in Page 4
When not enough labelled data are available, the previous approach does not work well. Then, to facilitate analysis of the map and the data, similar units need to be grouped to reduce the number of clusters; this is possible because of the topological ordering of the map units. Several methods can be used to perform this task. We have chosen to apply an agglomerative hierarchical algorithm using Ward's method to cluster our maps. The clustered map can then be labelled. The primary benefit of this approach is that more labelled data can be used to assign each cluster a label, which facilitates the analysis of the revealed groups. The first step in the analysis of the map is visual inspection. In the following, the basic visualizations of the SOM are introduced. The U-matrix shows distances between neighbouring map units using grey levels: dark grey represents long distances and light grey short ones. It is easy to see that the map unit in the top right corner forms a very clear cluster. The SOM does not utilise class information during the training phase; class labels can be displayed on an empty grid as a post-processing step after training is complete. From the labels it can be seen that unlabelled units indicate cluster borders and that the map unit in the top right corner corresponds to the normal operating condition. The two other operating conditions form the other cluster. A principal component projection is made for the data and applied to the map. Neighbouring map units are joined with lines to show the SOM topology, and labels associated with map units are also shown. Visualization of the SOM also shows that it is impossible to clearly isolate the classification boundary in the faulty rotor case. However, an increase in fault level can be seen from the bottom left corner to the top right corner of the Kohonen map. For further investigation, the map needs to be partitioned.
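Ward's agglomerative criterion, merging at each step the pair of clusters whose union least increases the within-cluster sum of squares, can be sketched naively for a small SOM codebook (a didactic O(m³) version, not an optimised implementation):

```python
import numpy as np

def ward_cluster(W, n_clusters):
    """Greedy agglomerative clustering of SOM codebook vectors (Ward's method)."""
    clusters = [[i] for i in range(len(W))]   # start: every unit is its own cluster
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                A, B = W[clusters[a]], W[clusters[b]]
                na, nb = len(A), len(B)
                # Ward criterion: SSE increase caused by merging clusters a and b
                d = na * nb / (na + nb) * np.sum((A.mean(0) - B.mean(0)) ** 2)
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)        # merge the cheapest pair
    labels = np.empty(len(W), int)
    for c, members in enumerate(clusters):
        labels[members] = c
    return labels
```

Because SOM units are far fewer than raw data points, even this naive version is fast enough for partitioning a trained map.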
PSO BASED CLUSTERING ALGORITHM
Particle swarm optimization (PSO) was proposed by Kennedy and Eberhart while attempting to simulate the choreographed, graceful motion of swarms of birds searching for food [23]. It is a heuristic, population-based evolutionary computation algorithm: the members of the whole population are maintained throughout the search procedure. The potential solutions in PSO are called particles, and each one is assigned a randomized velocity with which it is flown through the solution space. PSO does not use filtering operations such as crossover and/or mutation used in evolutionary methods [64]. Convergence speed and relative simplicity are the two important features that make it suitable for solving optimization problems. Usually PSO is initialized with a population of random solutions. Each particle explores a possible solution and adjusts its flight according to its own flying experience and that of its companions. The best position found by a particle during the course of its flight is denoted pBest (personal best); the optimal solution attained by the entire swarm is gBest (global best). PSO iteratively updates the velocity of each particle towards its pBest and gBest positions. To find an optimal or near-optimal solution, PSO keeps updating the current generation of particles, each of which is a candidate solution, using information about the best solutions obtained by each particle and by the entire population. Each particle thus has a set of attributes: its current velocity, its current position, the best position it has discovered so far, and the best position discovered by the entire swarm so far. Each particle begins with an initial velocity and position.
Thereafter, a swarm particle i updates its own velocity and position in accordance with the following equations:
v_i(t+1) = w·v_i(t) + C_p·r_1·(Pbest_i − x_i(t)) + C_g·r_2·(Gbest − x_i(t))    (5)

x_i(t+1) = x_i(t) + v_i(t+1)    (6)
In equation (5), w is the inertia weight; r1 and r2 are random numbers within the range [0, 1]; Cp is the cognitive learning rate and Cg is the social learning rate. Gbest is the best particle found so far and Pbest_i is the best position discovered so far by the corresponding particle [17]. The issues of global and local minima play an important role when data sets and their associated attributes are very large and clustering-based classification is important and critical. For certain data sets, such as medical, security, and finance data, the error produced by the K-means clustering algorithm is not acceptable. The objective function of the K-means algorithm is not convex and hence may contain many local minima. Bio-inspired algorithms have the advantage of finding globally optimal solutions [31, 38]: their random searching and information sharing make them a good tool for finding global solutions. We have used one such algorithm, Particle Swarm Optimization (PSO), for data clustering. In this section we propose a hybrid sequential clustering algorithm combining the K-means and PSO algorithms.
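Equations (5) and (6) translate directly into code. The sketch below is a generic PSO minimiser with assumed parameter values (inertia w = 0.7, learning rates 1.5); for clustering, the objective f would be a clustering criterion such as the quantization error of a candidate set of centers:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_iter=200, w=0.7, cp=1.5, cg=1.5, seed=0):
    """Plain PSO following Eqs. (5)-(6), minimising f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # gBest: swarm's best position
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. (5): inertia + cognitive pull toward pBest + social pull toward gBest
        v = w * v + cp * r1 * (pbest - x) + cg * r2 * (g - x)
        x = x + v                                     # Eq. (6): move the particles
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

On a smooth unimodal objective such as the sphere function, this swarm contracts quickly onto the global minimum.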
The motivation for this idea is that the PSO algorithm starts the clustering process, owing to its fast convergence speed, and the result of the PSO algorithm is then tuned by K-means towards near-optimal solutions. A flow chart of the proposed algorithm is shown in Fig. 8.

Fig. 8: PSO based clustering algorithm

procedure PSO
  repeat
    for i = 1 to number of individuals do
      if G(x_i) > G(P_i) then              % G() evaluates goodness
        for d = 1 to dimensions do
          p_id = x_id                      % P_i is the best state found so far
        end for
      end if
      g = i                                % arbitrary
      for j = indexes of neighbors do
        if G(P_j) > G(P_g) then
          g = j                            % g indexes the neighborhood's best performer
        end if
      end for
      for d = 1 to number of dimensions do
        v_id(t) = f(x_id(t-1), v_id(t-1), p_id, p_gd)   % update velocity
        v_id ∈ (-V_max, +V_max)                         % clamp velocity
        x_id(t) = f(v_id(t), x_id(t-1))                 % update position
      end for
    end for
  until stopping criteria
end procedure

The C-means algorithm is based on the similarity between data points and the specified cluster centers; its behaviour is mostly influenced by the number of clusters specified and by the random choice of initial cluster centers. Fuzzy C-means (FCM) is one of the most commonly used fuzzy clustering techniques for degree-of-membership estimation problems. It provides a method for grouping data points that populate some multidimensional space into a specific number of different clusters. A restriction of FCM is that the number of clusters must be known a priori. FCM employs fuzzy partitioning such that a data point can belong to several groups with membership grades between 0 and 1, and the membership matrix U is constructed of elements with values between 0 and 1. The aim of FCM is to find cluster centers that minimize a dissimilarity function. The membership matrix U is randomly initialized.
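The FCM alternation described above, updating the membership matrix U from the current centres and then recomputing the centres as membership-weighted means, can be sketched as follows (the fuzzifier m = 2 and the tolerance are conventional assumed values, not taken from the paper):

```python
import numpy as np

def fcm(X, C, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Fuzzy C-means: alternate membership and centre updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((C, len(X)))
    U /= U.sum(axis=0)                     # memberships of each point sum to 1
    centers = None
    for _ in range(n_iter):
        um = U ** m
        # centres = membership-weighted means of the data
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)              # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)      # standard FCM membership update
        if np.max(np.abs(U_new - U)) < eps:
            U = U_new
            break                          # memberships have stabilised
        U = U_new
    return U, centers
```

Taking the argmax over memberships gives a hard clustering; like K-means, the result still depends on the random initialisation, which is what the PSO hybrid aims to mitigate.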
In fuzzy clustering, a single particle represents a cluster center vector; each particle P_l is constructed as follows:
P_l = (V_1, V_2, …, V_C)    (7)

where C is the number of clusters and V_i is the vector of the i-th cluster center,

V_i = (v_i1, v_i2, …, v_iD)    (8)
where 1, 2, …, D are the dimensions of the cluster center vectors. Therefore, a swarm represents a number of candidate clusterings for the current data set. Each data vector belongs to every cluster with a different membership degree, so a fuzzy membership is assigned to each data vector. Each cluster has a cluster center, and each iteration presents a solution in the form of a vector of cluster centers. We determine the position vector P_l of every particle and update it; we then change the positions of the cluster centers based on these particles. For the purposes of this algorithm, we define the following notation:

n : number of data vectors
C : number of cluster centers
X_l(t) : position of particle l in stage t
V_l(t) : velocity of particle l in stage t
x_k : data vector k, k = 1, 2, …, n
G(t) : best position found by all particles in stage t
ρ(t) : fuzzy pseudo-partition in stage t
μ_i(t)(x_k) : membership of data vector k in cluster i in stage t

The fitness of the particles is measured as follows:

Step 1: Let t = 0; select the initial parameters, such as C, the initial positions and velocities of the particles, c1, c2, χ, w, a small positive number ε, and the stopping criterion.
Step 2: Calculate the memberships μ_i(t)(x_k) for all particles, where i = 1, 2, …, C and k = 1, 2, …, n.
Step 3: Calculate the fitness of each particle.
Step 4: Update the global best and local best positions.
Step 5: Update the positions and velocities of all particles.
Step 6: Update ρ(t+1) as in Step 2.
Step 7: Compare ρ(t) and ρ(t+1). If ‖ρ(t+1) − ρ(t)‖ ≤ ε, then stop; otherwise, increase t by one and continue from Step 3.
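Within this hybrid scheme, a natural fitness measure for a particle encoding C cluster centres (Eq. (7)) is the reciprocal of the fuzzy dissimilarity function: derive the fuzzy pseudo-partition from the particle's centres, then score it. The sketch below uses illustrative names and the conventional fuzzifier m = 2; it is one reasonable reading of the steps above, not the paper's exact formula:

```python
import numpy as np

def fuzzy_fitness(particle, X, C, m=2.0):
    """Score one PSO particle that encodes C cluster centres as a flat vector:
    derive FCM memberships from the centres and invert the fuzzy objective."""
    centers = particle.reshape(C, -1)      # flat position vector -> C x D centres
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
    d = np.fmax(d, 1e-12)                  # guard against zero distances
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=0)              # fuzzy pseudo-partition for these centres
    J = np.sum((U ** m) * d ** 2)          # fuzzy dissimilarity function
    return 1.0 / (1.0 + J)                 # higher fitness = better clustering
```

Particles whose centres sit on the true groups score higher, so the swarm's gBest drifts toward a good initialisation for the final FCM refinement.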
RESULTS AND DISCUSSIONS
Fig. 9: Training of the self-organizing map
Fig. 10: Weight positions of the SOM
Fig. 11: Neighbor weight distances of the SOM
Fig. 12: SOM pattern of the data
Fig. 13: Neighbor connections of the SOM
Fig. 14: Weight vectors of the SOM
Fig. 15: Data points and cluster centers
Fig. 16: After SOM-based training
Fig. 17: Cluster boundaries and converged cluster centers
CONCLUSIONS
This paper describes an efficient clustering method for improving the productivity and efficiency of process plant monitoring, fault detection, and isolation, and reviews the related work carried out in this field. The paper takes process data and trains a SOM on it to cluster the process data. It also discusses a hybrid clustering algorithm using PSO and the fuzzy C-means algorithm.
REFERENCES
[1] Teuvo Kohonen, "The Self-Organizing Map," Proceedings of the IEEE, vol. 78, no. 9, Sep 1990, pp. 1464-1480
[2] Scott C Newton, Surya Pemmaraju, Sunanda Mitra, "Adaptive Fuzzy Leader Clustering of Complex Data Sets in Pattern Recognition," IEEE Transactions on Neural Networks, vol. 3, no. 5, 1992, pp. 794-800
[3] E L Sutanto, K Warwick, "Cluster Analysis for Multivariable Process Control," Proceedings of the American Control Conference, vol. 1, 1995, pp. 749-750
[4] Anil K Jain, M N Murty, P J Flynn, "Data Clustering: A Review," ACM Computing Surveys, vol. 31, no. 3, 1999, pp. 265-323
[5] Juha Vesanto, Johan Himberg, Esa Alhoniemi, Juha Parhankangas, "Self-organizing map in Matlab: the SOM Toolbox," in Proceedings of the MATLAB DSP Conference, Nov 1999, pp. 35-40
[6] Andrea Baraldi, Palma Blonda, "A Survey of Fuzzy Clustering Algorithms for Pattern Recognition - Part II," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 29, no. 6, Dec 1999, pp. 786-801
[7] Ujjwal Maulik, Sanghamitra Bandyopadhyay, "Genetic Algorithm Based Clustering Technique," Pattern Recognition, vol. 33, 2000, pp. 1455-1465
[8] Anil K Jain, Robert P W Duin, Jianchang Mao, "Statistical Pattern Recognition: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, 2000, pp. 4-37
[9] Y M Sebzalli, X Z Wang, "Knowledge Discovery From Process Operational Data Using PCA and Fuzzy Clustering," Engineering Applications of Artificial Intelligence, vol. 14, 2001, pp. 607-616
[10] Hiroyuki Mori, Atsushi Yuihara, "Deterministic Annealing Clustering for ANN-Based Short Term Load Forecasting," IEEE Transactions on Power Systems, vol. 16, no. 3, Aug 2001, pp. 545-551
[11] Jang Hee Lee, Song Jin Yu, Sang Chan Park, "Design of Intelligent Data Sampling Methodology Based on Data Mining," IEEE Transactions on Robotics and Automation, vol. 17, no. 5, Oct 2001, pp. 637-649
[12] D P Solomatine, "Data-driven Modeling: Paradigm, Methods and Experiences," in Proceedings of the 5th International Conference on Hydroinformatics, July 2002, pp. 1-7
[13] Yu Han, Y H Song, "Using Improved Self-Organizing Map for Partial Discharge Diagnosis of Large Turbogenerators," IEEE Transactions on Energy Conversion, vol. 18, no. 3, Sep 2003, pp. 392-399
[14] Chip-Hong Chang, Pengfei Xu, Rui Xiao, Thambipillai Srikanthan, "New Adaptive Color Quantization Method Based on Self-Organizing Map," IEEE Transactions on Neural Networks, vol. 16, no. 1, Jan 2005, pp. 237-249
[15] Timo Ahvenlampi, Urpo Kortela, "Clustering Algorithm in Process Monitoring and Control Application to Continuous Digester," Informatica, vol. 29, 2005, pp. 101-109
[16] Fu Qiang, Hu Shang-Xu, Zhao Sheng-Ying, "Clustering Based Selective Neural Network Ensemble," Journal of Zhejiang University Science, vol. 6A, no. 5, 2005, pp. 387-392
[17] Anurag Sharma, Christian W Omlin, "Determining Cluster Boundaries Using Particle Swarm Optimization," International Journal of Mathematical and Computer Sciences, vol. 1, no. 2, 2005, pp. 82-86
[18] Tang Tianha, Wang Tianzhen, "An ANN-based Clustering Analysis Algorithm with Dynamic Data Window," 2005 International Conference on Control and Automation (ICCA2005), June 2005, pp. 581-586
[19] Jenn-Hwai Yang, Miin-Shen Yang, "A Control Chart Pattern Recognition System Using a Statistical Correlation Coefficient Method," Computers and Industrial Engineering, vol. 48, 2005, pp. 205-221
[20] Young-Hak Lee, Hyung Dae Jin, Chonghun Han, "On-Line Process State Classification for Adaptive Monitoring," Industrial & Engineering Chemistry Research, vol. 45, 2006, pp. 3095-3107
[21] Jun-Hai Zhai, Su-Fang Zhang, Xi-Zhao Wang, "An Overview of Pattern Classification Methodologies," Proceedings of the 5th International Conference on Machine Learning and Cybernetics, Aug 2006, pp. 3222-3227
[22] I Skrjanc, "Fuzzy Model Based Detection of Sensor Faults in Waste Water Treatment Plant," in Proceedings of the 5th WSEAS International Conference on Computational Intelligence, Man-Machine Systems and Cybernetics, 2006, pp. 195-199
[23] Surat Srinoy, Werasak Kurutach, "Combination Artificial Ant Clustering and K-PSO Clustering Approach to Network Security Model," in Proceedings of the 2006 International Conference on Hybrid Information Technology, 2006
[24] Adrian Costea, "The Analysis of the Telecommunications Sector By the Means of Data Mining Techniques," Journal of Applied Quantitative Methods, vol. 1, no. 2, Winter 2006, pp. 144-150
[25] Yi-Tung Kao, Erwie Zahara, I-Wei Kao, "A Hybridized Approach to Data Clustering," in Proceedings of the 7th Asia Pacific Industrial Engineering and Management Systems Conference, Dec 2006, pp. 497-504
[26] Sherin M Youssef, Mohamed Rizk, Mohamad El-Sherif, "Dynamically Adaptive Data Clustering Using Intelligent Swarm-like Agents," International Journal of Mathematics and Computers in Simulation, vol. 1, issue 2, 2007, pp. 108-118
[27] C Lionberger, M Cromaz, "Control of Acquisition and Cluster Based Online Processing of Gretina Data," Proceedings of ICALEPCS 07, 2007, pp. 93-95
[28] Zhe Song, Andrew Kusiak, "Constraint Based Control of Boiler Efficiency: A Data Mining Approach," IEEE Transactions on Industrial Informatics, vol. 3, no. 1, Feb 2007, pp. 73-83
[29] Gursewak S Brar, Yadwinder S Brar, Yaduvir Singh, "Implementation and Comparison of Contemporary Data Clustering Techniques for Multi Compressor System: A Case Study," WSEAS Transactions on Systems and Control, issue 2, no. 9, 2007, pp. 442-449
[30] D T Pham et al., "Data Clustering Using Bees Algorithm," Proceedings of the 40th CIRP International Manufacturing Systems Seminar, 2007
[31] Jeng-Ming Yih, Yuan-Horng Lin, Hsiang-Chuan Liu, "Clustering Analysis Method Based on Fuzzy C-Means Algorithm of PSO and PPSO with Application in Real Data," International Journal of Geology, vol. 1, issue 4, 2007, pp. 89-98
[32] Robert P W Duin, Elzbieta Pekalska, "The Science of Pattern Recognition. Achievements and Perspectives," Studies in Computational Intelligence, vol. 63, 2007, pp. 221-259
[33] Rodolfo V Tona V, Antonio Espuña, Luis Puigjaner, "Exploring and Improving Clustering based Strategies for Chemical Process Supervision," 17th European Symposium on Computer Aided Process Engineering (ESCAPE17), 2007, pp. 1-6
[34] Andrew Kusiak, Zhe Song, "Clustering-Based Performance Optimization of the Boiler-Turbine System," IEEE Transactions on Energy Conversion, vol. 23, no. 2, Jun 2008, pp. 651-658
[35] Derek T Anderson, Robert H Luke, James M Keller, "Speedup of Fuzzy Clustering Through Stream Processing on Graphics Processing Units," IEEE Transactions on Fuzzy Systems, vol. 16, no. 4, Aug 2008, pp. 1101-1106
[36] Xiaohui Cui, Justin M Beaver, Jesse St. Charles, Thomas E Potok, "Dimensionality Reduction Particle Swarm Algorithm for High Dimensional Clustering," in Proceedings of the 2008 IEEE Swarm Intelligence Symposium, Sep 2008
[37] Osama Abu Abbas, "Comparisons Between Data Clustering Algorithms," The International Arab Journal of Information Technology, vol. 5, no. 3, 2008, pp. 320-325
[38] K Premalatha, A M Natarajan, "A New Approach for Data Clustering Based on PSO with Local Search," Computer and Information Science, vol. 1, no. 4, 2008, pp. 139-145
[39] E Mehdizadeh, S Sadi-Nezhad, R Tavakkoli-Moghaddam, "Optimization of Fuzzy Clustering Criteria By A Hybrid PSO And Fuzzy C-Means Clustering Algorithm," Iranian Journal of Fuzzy Systems, vol. 5, no. 3, 2008, pp. 1-14
[40] T Niknam, M Nayeripour, B Bahmani Firouzi, "Application of a New Hybrid Optimization Algorithm for Cluster Analysis," World Academy of Science, Engineering and Technology, vol. 46, 2008, pp. 589-594
[41] E T Venkatesh, P Thangaraj, "Self-Organizing Map and Multilayer Perceptron Neural Network Based Data Mining To Envisage Agriculture Cultivation," Journal of Computer Science, vol. 4, issue 6, 2008, pp. 494-502
[42] Mohsen Lashkargir, S Amirhassan Monadjemi, Ahmad Baraani Dastjerdi, "A Hybrid Multi-Objective Particle Swarm Optimization Method to Discover Biclusters in Microarray Data," International Journal of Computer Science and Information Security, vol. 4, no. 1 & 2, 2009
[43] Birendra Biswal, P K Dash, B K Panigrahi, "Power Quality Disturbance Classification Using Fuzzy C Means Algorithm and Adaptive Particle Swarm Optimization," IEEE Transactions on Industrial Electronics, vol. 56, no. 1, Jan 2009, pp. 212-220
[44] Kumar Dhiraj, Santanu Kumar Rath, "Comparison of SGA and RGA Based Clustering Algorithm for Pattern Recognition," International Journal of Recent Trends in Engineering, vol. 1, no. 1, May 2009, pp. 269-273
[45] Ying-Xin Liao, Jin-Hua She, Min Wu, "Integrated Hybrid PSO and Fuzzy-NN Decoupling Control for Temperature of Reheating Furnace," IEEE Transactions on Industrial Electronics, vol. 56, no. 7, Jul 2009, pp. 2704-2714
[46] Hsiang-Chuan Liu, Bai-Cheng Jeng, Jeng-Ming Yih, Yen-Kuei Yu, "Fuzzy C-Means Algorithm Based on Standard Mahalanobis Distances," in Proceedings of the 2009 International Symposium on Information Processing (ISIP'09), 2009, pp. 422-427
[47] V K Panchal, Harish Kundra, Jagdeep Kaur, "Comparative Study of Particle Swarm Optimization Based Unsupervised Clustering Techniques," International Journal of Computer Science and Network Security, vol. 9, no. 10, Oct 2009, pp. 132-140
[48] S Vijayachitra, A Tamilarasi, M Pravin Kumar, "MIMO Process Optimization Using Fuzzy GA Clustering," International Journal of Recent Trends in Engineering, vol. 2, no. 2, Nov 2009, pp. 16-18
[49] Mrutyunjaya Panda, Manas Ranjan Patra, "Ensemble Voting System for Anomaly Based Network Intrusion Detection," International Journal of Recent Trends in Engineering, vol. 2, no. 5, Nov 2009, pp. 8-13
[50] Andrea Paoli, Farid Melgani, Edoardo Pasolli, "Clustering of Hyperspectral Images Based on Multiobjective Particle Swarm Optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 12, Dec 2009, pp. 4175-4188
[51] Tarek Aroui, Yassine Koubaa, Ahmed Toumi, "Clustering of Self-Organizing Map Based Approach in Induction Machine Rotor Faults Diagnostics," Leonardo Journal of Sciences, issue 15, Jul-Dec 2009, pp. 1-14
[52] Amit Jain, B Satish, "Short Term Load Forecasting By Clustering Techniques Based on Daily Average and Peak Loads," in Proceedings of the 2009 IEEE Power & Energy Society General Meeting, 2009
[53] Anil K Jain, "Data Clustering: 50 Years Beyond K-Means," Pattern Recognition Letters, vol. 31, 2010, pp. 651-666
[54] V Kavitha, M Punithavalli, "Clustering Time Series Data Stream - A Literature Review," International Journal of Computer Science and Information Security, vol. 8, no. 1, 2010, pp. 289-294
[55] H Izakian, A Abraham, "Fuzzy C-Means and Fuzzy Swarm for Fuzzy Clustering Problem," Expert Systems with Applications, 2010, doi: 10.1016/j.eswa.2010.07.112
[56] S Kalyani, K S Swarup, "Supervised Fuzzy C Means Clustering Techniques for Security Assessment and Classification of Power System," International Journal of Engineering, Science and Technology, vol. 2, no. 3, 2010, pp. 175-185
[57] Xian-Xia Zhang et al., "Spatially Constrained Fuzzy Clustering Based Sensor Placement for Spatiotemporal Fuzzy Control System," IEEE Transactions on Fuzzy Systems, vol. 18, no. 5, 2010, pp. 946-957
[58] Mika Liukkonen et al., "Analysis of Flue Gas Emission Data From Fluidized Bed Combustion Using Self-Organizing Map," Applied Computational Intelligence and Soft Computing, Hindawi Publishing Corporation, 2010, pp. 1-8
[59] Vasil Simeonov et al., "Lake Water Monitoring Data Assessment By Multivariate Statistics," Journal of Water Resource and Protection, vol. 2, 2010, pp. 353-361
[60] Ibrahim Masood, Adnan Hassan, "Issues in Development of ANN-Based Control Chart Pattern Recognition Schemes," European Journal of Scientific Research, vol. 39, no. 3, 2010, pp. 336-355