INTRODUCTION
Fault detection and isolation (FDI) is a critical concern in many systems, such as chemical and power plants, for maintaining the safety, reliability, and efficiency of the overall plant. A fault is defined as an unpermitted deviation of at least one characteristic variable or parameter from its nominal state. The FDI problem is that of reacting to unexpected events in a process, and it consists of three steps: deciding whether a fault has occurred, determining its location, and estimating the size and type of the fault. To this end, a set of residuals is generated that is sensitive to faults but robust to disturbances and modeling errors. Under normal conditions these residuals should have zero mean; they are then used to draw inferences about the occurrence of a fault and about the kind of fault that has occurred.
Extensive research on FDI methods has been published in the literature, and the methods can be divided into two broad categories: model-free and model-based. Model-based approaches are further classified as quantitative (observer-based or Kalman-filter-based methods), qualitative or knowledge-based (e.g., fuzzy methods), and data-driven (e.g., neural-network-based methods). In a three-part review with applications in chemical process engineering, Venkatasubramanian et al. summarized a selection of these approaches. Hwang et al. published a survey that focuses primarily on quantitative model-based approaches to FDI. Another quantitative model-based approach used by several researchers is the parity relation method. Kalman filtering (KF) is a well-known recursive technique for state and parameter estimation, and it has been shown that the extended Kalman filter (EKF) can be used for fault detection and diagnosis in chemical processes. To deal with external disturbances and unpredictable faults, a KF has been suggested for FDI in a continuous stirred tank reactor (CSTR). Saravanakumar et al. used a bank of KFs to detect and isolate incipient additive faults in DFIG and PMSM wind turbine generators under possible shifts in the reference/disturbance; a similar scheme has been suggested for interior permanent-magnet synchronous motors (IPMSMs).
Knowledge-based and data-driven methods, on the other hand, have received much attention in recent years. Soft-computing methods such as fuzzy inference systems (FISs) and neural networks (NNs) can approximate smooth nonlinear functions with arbitrary precision and are therefore central to the development of intelligent FDI techniques for nonlinear systems. Because of their fast and stable implementation, their success in learning nonlinear mappings, and their pattern-recognition capability, neural networks have been applied effectively for FDI purposes. The survey by Angeli et al. concentrates on computational and artificial-intelligence FDI approaches. There are many applications of FIS and NN methods to FDI: Simani et al. proposed a fault-detection scheme based on fuzzy model identification to detect and isolate faults in a wind turbine simulator; Garcia developed nonlinear FDI techniques that use a multi-layer perceptron neural network (MLPN) as a dynamic approximator for parameter estimation; and Benkouider et al. proposed a diagnostic algorithm for batch and semi-batch reactors that uses the EKF to estimate the reactor's heat-transfer coefficient and a probabilistic NN for fault classification. Because fault detection (FD) is critical to safe operation, it is a well-studied field in several disciplines.
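To make the Kalman-filter residual idea discussed above concrete, the following minimal Python sketch generates filter innovations for a linear state-space model; all matrices, noise levels, and the injected sensor bias are hypothetical assumptions, not values from any of the cited works. The normalized innovations stay near zero in normal operation and acquire a persistent bias after an additive sensor fault is injected.

import numpy as np

# Hypothetical linear process: x[k+1] = A x[k] + w,  y[k] = C x[k] + v
A = np.array([[0.95, 0.05],
              [0.00, 0.90]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)          # process-noise covariance (assumed)
R = np.array([[1e-2]])        # measurement-noise covariance (assumed)

rng = np.random.default_rng(0)
x = np.zeros(2)               # true state
x_hat = np.zeros(2)           # filter estimate
P = np.eye(2)                 # estimate covariance

innovations = []
for k in range(200):
    # Simulate the true process and measurement
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]))
    if k >= 100:
        y = y + 0.5           # inject an additive sensor fault at k = 100

    # Kalman filter: predict, then update
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    r = y - C @ x_hat                    # innovation (residual)
    S = C @ P @ C.T + R                  # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ r
    P = (np.eye(2) - K @ C) @ P
    innovations.append(float(r[0]) / np.sqrt(S[0, 0]))

# A persistent bias in the normalized innovations signals the fault
print("pre-fault innovation mean :", np.mean(innovations[:100]))
print("post-fault innovation mean:", np.mean(innovations[100:]))

A bank of such filters, each matched to one hypothesized fault, extends this scheme from detection to isolation, which is the idea behind the KF-bank approaches cited above.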
In contrast to a fault, a failure occurs when a system is unable to perform its required functions under specified operating conditions. In general, a fault is small in comparison with a failure, but most failures are caused by missed or undetected faults. Over the last two decades, research activities in the many sub-areas of fault detection have included: determining what kind of prior knowledge best represents the process for FD applications so that real-time accuracy can be achieved (e.g., first-principles models versus historical data); determining which approach (quantitative or qualitative) results in more effective FD; and more focused questions within each of these areas. Excellent reviews of FDI can be found in the literature.
FAULT-DETECTION AND DIAGNOSIS METHODS
Typically, processes are continuous, with discrete-time measurements. Faults may be multiplicative (a function of the states and inputs) or additive (superimposed on the process dynamics). The latter are usually detected using filtering techniques, while the former are discovered using parameter-estimation techniques. A quantitative model-based approach8 is the most commonly used technique for detecting a fault. In the field of FD, the inconsistency between actual and predicted behaviour, an abnormal event signaling a possible fault, is expressed as a residual. Generating a residual requires some form of comparison, and hence redundancy. Redundancies are classified into two types: hardware redundancy and analytical redundancy. The former requires redundant sensors, which are constrained by physical limitations and economic cost. Analytical redundancy is achieved by exploiting the functional dependence among process variables, typically expressed as a set of equations containing the process's states, inputs, and outputs. The essence of analytical redundancy is to compare actual plant behaviour to the model's prediction; any inconsistency is used to detect and isolate faults. Allowing for process and measurement noise, residuals should be close to zero when no fault occurs but show substantial deviation when a fault occurs. Residual generation requires an explicit, quantitative model, and because of the physical meaning of its parameters a first-principles model is the more usable in diagnostic procedures. Although nonlinearity is inherent in most chemical systems, the application of FDI to a nonlinear process usually employs linearization at the operating point when the fault concerns a defined operating state. Various algorithms have been proposed to achieve analytical redundancy using a linear, approximate model, such as parity relations11,12, Kalman filters13,14, and parameter estimation9,15. Model uncertainty, however, is unavoidable. Some researchers have proposed robust FDI approaches16,17 that eliminate the influence of model uncertainty, but in certain situations the uncertainty effect cannot be suppressed without also suppressing the effect of the fault on the residual. To address this issue, some researchers suggest that, instead of suppressing the entire uncertainty effect, a mini-max optimization problem be formulated in which sensitivity to model uncertainty is minimized while sensitivity to the fault is maximized18; such an optimization may not be feasible for online, real-time applications. Measurements of actual systems are regarded as statistical time series: a mixture of deterministic system dynamics and stochastic dynamics resulting from unpredictable factors. It is therefore reasonable to express the process states in a probabilistic framework. That is, the observations have expected probability distributions while the process is in its nominal state, and the process is declared out of the nominal state if the expected distributions are statistically violated. A parametric description of the probability distribution, such as the mean and standard deviation, is often used; under faulty conditions, either the mean or the standard deviation will deviate from its nominal range. Fault detection is thereby reduced to monitoring the values of these statistical parameters.
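As an illustration of reducing fault detection to monitoring statistical parameters, the sketch below applies a two-sided z-test to the windowed mean of a residual stream; the window length, significance level, and simulated mean shift are assumed values chosen for illustration.

import numpy as np
from scipy.stats import norm

def detect_mean_shift(residuals, mu0=0.0, sigma0=1.0, window=50, alpha=0.01):
    """Flag samples where the windowed residual mean deviates from mu0
    beyond a two-sided z critical value (illustrative only)."""
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    alarms = []
    for k in range(window, len(residuals) + 1):
        win = residuals[k - window:k]
        # Under the no-fault hypothesis the window mean ~ N(mu0, sigma0^2/window)
        z = (np.mean(win) - mu0) / (sigma0 / np.sqrt(window))
        alarms.append(abs(z) > z_crit)
    return np.array(alarms)

# Synthetic residuals: nominal noise, then a small mean shift (hypothetical fault)
rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(0.8, 1.0, 200)])
alarms = detect_mean_shift(r)
print("first alarming window ends at sample:", int(np.argmax(alarms)) + 50)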
As process complexity grows, multivariate statistical techniques are needed to compress the correlated data so that the critical information is preserved. The primary purpose of multivariate statistical approaches such as principal component analysis (PCA) and partial least squares (PLS) is to reduce a large number of correlated process measurements to a smaller set of uncorrelated variables. Statistical classifiers, which cast the problem as classical statistical pattern recognition, are another promising approach: FDI is realized by integrating the classifier's instantaneous decisions over time, using information about the statistical properties of the system's failure modes. Furthermore, a significant body of work applies artificial neural networks to FDI; in general, neural networks for fault detection can be categorized along two dimensions1-3: network architecture and learning approach (supervised or unsupervised). Fault detection and isolation functions are often embedded in a process supervision/management scheme and, like process control, FDI requires computational efficiency to achieve real-time results. The logic of fault detection is to give an affirmative or negative answer to whether or not a fault exists. Although the goal is simple, the path to it is not. When constructing the FD module, two kinds of errors must be considered: false alarms (type I) and missed detections (type II)6. A type I error occurs if the FD design is too sensitive to deviations from nominal operation. On the other hand, if the FD threshold is set too high, a fault can be masked, resulting in a type II error. Furthermore, in closed-loop systems, compensating effects such as dynamic feedback tend to obscure the effects of faults as time passes, so capturing the faulty behavior at or very near the time the fault occurs is critical to prevent masking or compounding effects that may lead to catastrophic failures. When a fault (or faults) is detected, it must be isolated. If the inference of a single fault is correct, either brute force or intelligent analysis may be used to identify the underlying cause. The former relies on a scheme that exhaustively tests every potentially faulty component, but such a method is time-consuming; knowledge-based logic, cluster analysis, pattern recognition, and parametric and nonparametric models are examples of the latter. The FDI procedure is divided into two stages: residual generation and decision making. An FDI scheme has the configuration shown in Fig. 1 for a given set of hypothesized failures. Sensor outputs are first processed to enhance the effect of a failure (if present) so that it can be recognized. The processed measurements are called the residuals, and the enhanced failure effect on the residuals is called the failure signature. Intuitively, the residuals represent the difference between various functions of the observed sensor outputs and their expected values in the normal (no-fail) mode. In the absence of a failure, the residuals should be unbiased, demonstrating agreement between actual and expected normal system behavior; a failure signature typically takes the form of residual biases that are unique to the failure. Residual generation therefore depends on knowledge of the system's normal behaviour, and the sophistication of the actual residual-generation method may vary.
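Returning to PCA mentioned above, the following numpy-only sketch fits a PCA model on normal operating data and monitors Hotelling's T2 in the model subspace and the squared prediction error (SPE) in the residual subspace. The number of retained components is assumed, and the control limits are taken as empirical 99th percentiles of the training statistics rather than the theoretical F- and chi-squared-based limits that would be used in practice.

import numpy as np

rng = np.random.default_rng(2)

# Training data from normal operation (hypothetical: 500 samples, 10 variables)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd                                  # standardize

# PCA via SVD of the scaled training data
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
a = 3                                               # retained components (assumed)
P = Vt[:a].T                                        # loadings (10 x a)
lam = (S[:a] ** 2) / (Xs.shape[0] - 1)              # PC variances

def t2_spe(x):
    """Hotelling T^2 and squared prediction error for one scaled sample."""
    t = P.T @ x                                     # scores
    t2 = np.sum(t ** 2 / lam)
    e = x - P @ t                                   # residual in measurement space
    return t2, e @ e

stats = np.array([t2_spe(x) for x in Xs])
t2_lim, spe_lim = np.percentile(stats, 99, axis=0)  # empirical 99% limits

# A new sample with a bias on variable 0 (hypothetical sensor fault)
x_new = (X[0] - mu) / sd
x_new[0] += 4.0
t2, spe = t2_spe(x_new)
print(f"T2={t2:.1f} (limit {t2_lim:.1f}), SPE={spe:.1f} (limit {spe_lim:.1f})")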
Figure 1: Two-stage structure of the FDI process
Model-based fault-detection methods19 use measurements of the available input U(t) and output Y(t) variables to detect faults in processes, actuators, and sensors. The basic process-model-based approaches are: 1. state and output observers (estimators); 2. parity equations; 3. parameter identification and estimation. These generate residuals for state variables and/or output variables, using fixed parametric or nonparametric models (methods 1 and 2) or adaptive parametric models (method 3). The type of fault to be detected is a critical consideration for these techniques: one may distinguish additive faults, which influence the process variables by summation, from multiplicative faults, which are products of process variables. Depending on the type of fault, the basic approaches give different results. If measurements are available only for the outputs Y(t), signal-model-based methods can be applied; vibrations of rotating machinery or electrical circuits, in particular, can be monitored in this way. Typical signal-model-based fault-detection methods are: 1. band-pass filters; 2. spectral analysis (FFT); 3. maximum-entropy estimation (a spectral-analysis sketch follows this paragraph). The characteristic quantities or features obtained from fault-detection methods show stochastic behaviour, and deviations from normal behaviour must then be detected by change-detection methods such as: 1. mean and variance estimation; 2. likelihood-ratio tests and Bayes decision rules; 3. run-sum tests and two-sample t-tests. For fault diagnosis, if several symptoms change differently for particular faults, a first approach is to use classification methods that map changes in the symptom vector to faults. Suitable classification methods include: 1. geometric-distance and probabilistic approaches; 2. artificial neural networks; 3. fuzzy clustering. If more information about the relations between symptoms and faults is available in the form of diagnostic models, reasoning methods can be applied. Diagnostic models exist in the form of symptom-fault causalities, such as symptom-fault trees, and these causalities can then be expressed as IF-THEN rules. Both analytical and heuristic symptoms (from the operators) can be processed, and when they are regarded as uncertain, probabilistic or fuzzy descriptions lead to a unified representation of the symptoms (Fig. 1). Through forward and backward reasoning, the probabilities or possibilities of faults are obtained as the diagnosis. Typical approximate-reasoning methods are: 1. probabilistic reasoning; 2. fuzzy reasoning with possibility theory; 3. reasoning with artificial neural networks. This brief review shows that many new approaches have been developed over the past 20 years, and it is clear that they can be combined. Frequency-response methods are not commonly employed because of their application-specific implementation.
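The sketch below illustrates the spectral-analysis route on a vibration-like signal; the sampling rate, the 50 Hz nominal component, and the 120 Hz "defect" frequency are all hypothetical choices, not values from any cited application.

import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)

def spectrum(x):
    """One-sided amplitude spectrum via FFT with a Hann window."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    return np.fft.rfftfreq(len(x), 1.0 / fs), 2.0 * np.abs(X) / len(x)

# Nominal signal: 50 Hz machine component plus measurement noise
healthy = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)
# Faulty signal: added 120 Hz component (hypothetical defect frequency)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 120 * t)

freqs, amp_ref = spectrum(healthy)
_, amp_new = spectrum(faulty)

# Flag the monitored band if the new spectrum greatly exceeds the reference
band = (freqs > 100) & (freqs < 140)
if amp_new[band].max() > 5 * amp_ref[band].max():
    print("spectral anomaly near", freqs[band][np.argmax(amp_new[band])], "Hz")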
DISADVANTAGES
These methods have the following drawbacks, however: 1. They cannot diagnose an abrupt failure or a fault that grows rapidly. 2. The control loop compensates for a fault while it is still in its incipient stage, so the fault goes undetected. 3. Setting the upper and lower thresholds for a given performance variable is a difficult task. 4. If the band between the upper and lower thresholds is narrow, the operators receive too many false alarms, and if it is wide, real faults are difficult to detect (illustrated by the sketch below).
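As an illustration of points 3 and 4, the following sketch sweeps an alarm threshold over synthetic residuals (both the nominal and the faulty distributions are assumed): a tight threshold raises the false-alarm (type I) rate, while a loose one raises the missed-detection (type II) rate.

import numpy as np

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 1.0, 10000)     # residuals under nominal operation
faulty = rng.normal(2.0, 1.0, 10000)     # residuals under a hypothetical fault

for thr in [1.0, 2.0, 3.0]:
    false_alarm = np.mean(np.abs(normal) > thr)   # type I error rate
    missed = np.mean(np.abs(faulty) <= thr)       # type II error rate
    print(f"threshold {thr:.1f}: false alarms {false_alarm:.3f}, "
          f"missed detections {missed:.3f}")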
A good FDD system must have the following qualities: 1. Ability to detect small faults, whether abrupt or incipient, in their early stages and to give an early warning. 2. Capacity to differentiate between the various faults of the device. 3. Upper and lower thresholds set so that a trade-off is struck between false alarms and missed detections. 4. Robustness to disturbances and modelling errors in noisy environments. 5. Adaptability to external stimuli and to dynamic changes in the environment. 6. Capacity to recognize unknown or novel faults that were not part of the FDD design. In general, specific fault types are assumed and the patterns of the various faults are studied; if a good dynamic model is available, data can be generated for the different faults, and otherwise extensive historical process data are needed under normal and the various faulty conditions. It is impractical to include every possible fault in such studies, but unknown faults should not be misclassified as known ones (see the classification sketch after this list). 7. Ability to detect several faults at once, which is challenging because of the interacting nature of multiple faults. 8. A minimal number of modeling parameters in the diagnostic classifier. 9. A balance between the sophistication of the algorithms used and their computational and storage demands.
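As a sketch of quality 6 (novelty identifiability), the following nearest-centroid classifier rejects symptom vectors that lie far from every known fault class instead of forcing them into one; the class data and the rejection radius are hypothetical.

import numpy as np

rng = np.random.default_rng(5)

# Hypothetical symptom vectors for two known fault classes
classes = {
    "fault A": rng.normal([2.0, 0.0], 0.3, size=(50, 2)),
    "fault B": rng.normal([0.0, 2.0], 0.3, size=(50, 2)),
}
centroids = {name: d.mean(axis=0) for name, d in classes.items()}

# Rejection radius: a generous multiple of the within-class scatter (assumed)
radius = 3 * max(np.linalg.norm(d - d.mean(axis=0), axis=1).max()
                 for d in classes.values())

def diagnose(x):
    """Assign to the nearest known class, or reject as an unknown fault."""
    name, c = min(centroids.items(), key=lambda kv: np.linalg.norm(x - kv[1]))
    return name if np.linalg.norm(x - c) < radius else "unknown fault"

print(diagnose(np.array([2.1, 0.1])))    # near fault A
print(diagnose(np.array([-3.0, -3.0])))  # far from both known classes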
CHALLENGES
Dimensionality reduction
The number of measured variables is considerable owing to the growing size and sophistication of modern chemical processes; an FCCU, for instance, typically involves more than one thousand measured variables. The resulting datasets contain many variables and are large and data-rich yet information-poor. Removing irrelevant or redundant features has been shown to make FDD models both simpler and less computationally costly. Feature extraction and feature selection are the two major approaches to dimensionality reduction. Feature-extraction methods generate a set of new features that preserve the relevant information by combining the original features; principal component analysis (PCA), partial least squares (PLS), and multiple discriminant analysis (MDA) are widely used feature-extraction techniques for FDD. Unlike feature extraction, feature selection leaves the original meaning of the selected features unchanged. Feature-selection techniques can be categorized into two groups: filter-based strategies, such as the F-statistic, information gain, and mutual information, and wrapper-based strategies, such as the genetic algorithm (GA) and the artificial neural network (ANN). Verron et al. proposed a mutual-information-based feature-selection methodology and used it to detect three faults (faults #4, #9, and #11) in the TE process. Ghosh et al. used a genetic algorithm (GA) framework to optimize variable selection in the design of a PCA process-monitoring model. Shu et al. proposed a modified transfer-entropy technique for feature selection and applied it to a real chemical process; the results showed that 50% to 90% of the selected characteristic variables matched those a senior plant expert would designate for critical alarms. Apart from the cases above, however, feature-selection strategies have seldom been considered in the chemical FDD field, even though feature-selection methods are widely used in pattern recognition and offer great benefits for FDD.
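In the spirit of the mutual-information filter approach, the sketch below ranks measured variables by their mutual information with the fault label using scikit-learn; the dataset is synthetic, with fault signatures planted on variables 3 and 7, which should therefore rank near the top.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(6)

# Synthetic data: 20 measured variables, only variables 3 and 7 carry the fault
n = 400
X = rng.normal(size=(n, 20))
y = (rng.random(n) > 0.5).astype(int)          # 0 = normal, 1 = faulty
X[y == 1, 3] += 1.5                            # fault signature on variable 3
X[y == 1, 7] -= 1.0                            # and on variable 7

mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:5]               # keep the 5 most informative
print("selected variables:", sorted(top_k.tolist()))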
Integration of individual FDD approaches
Real chemical process data often have the following properties: non-Gaussian distributions, nonlinearity, time-varying and multimodal behaviour, and autocorrelation, which violate the methodological assumptions of some FDD methods, such as PCA and PLS, making FDD a daunting challenge for real chemical processes. State-of-the-art data-driven FDD approaches address these problems individually: independent component analysis (ICA) and the Gaussian mixture model for non-Gaussian distributions; the self-organizing map for nonlinearity; Fisher discriminant analysis and the hidden Markov model for time-varying and multimodal processes; and dynamic trajectory analysis for dynamic and batch processes. Each solution has its own benefits and drawbacks, and a variety of proposals have been suggested (see Table 1). Under different process conditions, a given solution may behave differently, so merging several FDD methods to handle these data characteristics simultaneously is a promising idea. There has been a recent movement toward hybrid FDD methods that combine the advantages of the individual approaches: for example, combined PCA and ANN; the combined FDA and PLS of Perk et al.; and the hierarchically distributed fault-detection and diagnosis framework proposed by Ghosh et al.
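A hybrid scheme of the combined PCA-and-ANN kind can be sketched in a few lines with scikit-learn: PCA compresses the correlated measurements and a small MLP classifies the scores. The data, fault signatures, and all hyperparameters here are illustrative assumptions, not a reproduction of any cited method.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic correlated process data with three classes: normal, fault 1, fault 2
n = 300
base = rng.normal(size=(3 * n, 5)) @ rng.normal(size=(5, 20))
y = np.repeat([0, 1, 2], n)
base[y == 1, :5] += 1.0        # hypothetical fault signatures
base[y == 2, 10:15] -= 1.0

X_tr, X_te, y_tr, y_te = train_test_split(base, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("hold-out accuracy:", round(model.score(X_te, y_te), 3))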
Adaptive FDD approaches
Industrial operating states also shift: they are affected by changes in raw materials, equipment fouling, abrasion of mechanical parts, changes in catalyst activity, production of different product quality grades, and changes in the external environment. Not all potential operating states can be captured in the historical records, and certain changes, particularly environmental ones, are difficult to anticipate when building and training FDD models. Consequently, the performance of FDD approaches often degrades during online service, and implementing adaptive FDD methods to improve their online performance is an immediate necessity. As noted in the introduction, publications on FDD approaches are numerous; most papers, however, use data from simulated processes such as the Tennessee Eastman (TE) process to validate their FDD approaches, which suggests that effective applications of FDD to actual chemical processes are still few and challenging. The key obstacles to practical use can be summarized as follows. 1) High requirements on expert experience and on the number of historical training samples. Detailed process knowledge and strong collaboration between experts and operators are essential to develop and validate model-based FDD approaches; given the vast size and sophistication of modern plants, however, this is unrealistic. Moreover, critical faults, particularly similar faults, do not occur often in practice, and the limited number of historical fault samples cannot meet the training-sample requirements of most data-driven FDD procedures. 2) Difficulty of FDD for transition processes. A transition phase, such as start-up or shutdown, is a period during which control and alarm systems are often out of service and failures may occur. FDD is more challenging for transition processes because of their unsteady behaviour and the nonlinear, complex characteristics of their datasets. 3) Poor adaptability and self-learning capability. Chemical processes are influenced by the external environment, different operators, product quality grades, and so on, and new types of faults may occur during online operation. An online FDD system should therefore possess adequate adaptation and self-learning capabilities to cope with these unpredictable shifts. Unfortunately, traditional methods such as PCA and ANN typically require extensive expertise, data, and time to adapt or retrain. Under these circumstances, more and more researchers are turning to intelligent FDD strategies. Artificial immune systems (AISs), for instance, are computational-intelligence techniques based on the principles and mechanisms of biological immune systems. The central idea of an AIS is to distinguish self from non-self; based on clonal-selection and mutation algorithms, an AIS can perform effective FDD even with few historical samples, which makes it very promising for practical applications. Shu et al. applied vaccine transplantation within an AIS so that faults for which no historical samples exist can be diagnosed; Dai et al. proposed a dynamic-time-warping (DTW)-based AIS for FDD in a simulated batch chemical process; and Shu et al. applied an AIS to FDD in an operating FCCU unit.
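To illustrate the self/non-self idea, the following minimal negative-selection sketch (a simplified AIS variant, not the clonal-selection schemes of the cited works; the feature space, radii, and detector count are all assumed) keeps only random detectors that match no "self" (normal) sample, and flags any new sample covered by a detector as non-self, i.e., potentially faulty.

import numpy as np

rng = np.random.default_rng(8)

# "Self" set: features of normal operation in a 2-D space (hypothetical)
self_data = rng.normal(0.0, 1.0, size=(200, 2))
r_self, r_det = 0.5, 0.5      # self and detector radii (assumed)

# Negative selection: keep only random detectors that match no self sample
detectors = []
while len(detectors) < 300:
    d = rng.uniform(-5.0, 5.0, size=2)
    if np.min(np.linalg.norm(self_data - d, axis=1)) > r_self + r_det:
        detectors.append(d)
detectors = np.array(detectors)

def is_faulty(x):
    """A sample covered by any detector is non-self, i.e. potentially faulty."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) < r_det)

print(is_faulty(np.array([0.1, -0.2])))  # inside the self region -> False
print(is_faulty(np.array([4.0, 4.0])))   # far outside -> typically True

Because the detectors are generated without any fault samples, such a scheme needs only normal-operation data, which is the practical appeal of AIS noted above.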
CONCLUSION
FDD methods have attracted significant interest from both the process industries and academia for effective process monitoring, and a number of practical FDD-based process-monitoring systems have been developed and applied in manufacturing processes. However, owing to the peculiar features of real industrial data (multivariate, correlated, nonlinear, non-stationary, multimodal, class-imbalanced, etc.), many difficulties remain in applying FDD approaches to real industrial processes. It is therefore important to investigate new hybrid approaches and to design more sophisticated FDD models using different intelligent techniques, so as to bridge the still considerable gulf between the theoretical approaches and their implementation. This review has surveyed FDD techniques, their implementations, and their challenges.
REFERENCES
1. Zhang, C.; Gao, X.; Li, Y.; Feng, L. (2019). Fault detection strategy based on weighted distance of k nearest neighbors for semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf., 32, pp. 75–81.
2. Zhao, H. (2018). Dynamic graph embedding for fault detection. Comput. Chem. Eng., 117, pp. 359–371.
3. Ming, L.; Zhao, J. (2017). Review on chemical process fault detection and diagnosis. In Proceedings of the 6th International Symposium on Advanced Control of Industrial Processes (AdCONIP), Taipei, Taiwan; pp. 457–462.
4. Döhler, M.; Mevel, L.; Zhang, Q. (2016). Fault detection, isolation and quantification from Gaussian residuals with application to structural damage diagnosis. Annu. Rev. Control, 42, pp. 244–256.
5. Weyer, S.; Schmitt, M.; Ohmer, M.; Gorecky, D. (2015). Towards Industry 4.0: standardization as the crucial challenge for highly modular, multi-vendor production systems. IFAC-PapersOnLine, 48(3), pp. 579–584.
6. Mori, J.; Yu, J. (2014). Quality relevant nonlinear batch process performance monitoring using a kernel based multiway non-Gaussian latent subspace projection approach. Journal of Process Control, 24, pp. 57–71.
7. Sammaknejad, N.; Huang, B.; Xiong, W.; et al. (2014). Operating condition diagnosis based on HMM with adaptive transition probabilities in presence of missing observations. AIChE Journal, 61(2), pp. 477–493.
8. Yu, J.; Rashid, M. M. (2013). A novel dynamic Bayesian network-based networked process monitoring approach for fault detection, propagation identification, and root cause diagnosis. AIChE Journal, 59(7), pp. 2348–2365.
9. Yu, J. (2012). A nonlinear kernel Gaussian mixture model based inferential monitoring approach for fault detection and diagnosis of chemical processes. Chemical Engineering Science, 68, pp. 506–519.
10. Ghosh, K.; Natarajan, S.; Srinivasan, R. (2011). Hierarchically distributed fault detection and identification through Dempster-Shafer evidence fusion. Industrial & Engineering Chemistry Research, 50, pp. 9249–9269.
Corresponding Author Kapil Rajput*
Assistant Professor, Department of Mechanical & Chemical Engineering, Galgotias University, Greater Noida, Uttar Pradesh, India