Statistical Process Control in Service Industries: Unveiling the Keys to Quality Management
 
Archana K.1*, Dr. Sudesh Kumar2
1 Research Scholar, Sunrise University, Alwar, Rajasthan, India
Email: archanaaneeshc1@gmail.com  
2 Professor, Dept. of Statistics, Sunrise University, Alwar, Rajasthan, India
Abstract - In service industries, consistent quality is crucial for both customer satisfaction and competitive advantage. Statistical Process Control (SPC) provides a systematic approach to monitoring and controlling processes so that quality criteria are met and maintained. This paper examines the application of SPC in several service industries and its contribution to improving process efficiency and service quality. Although SPC methodologies are traditionally associated with manufacturing, their use in the service sector is growing. The study shows that SPC is highly beneficial for service industries such as healthcare, banking, and hospitality. Through case studies and real data, we demonstrate its usefulness in identifying variation, reducing errors, and improving service quality. The findings indicate that SPC supports real-time decision-making and thus proactive management of service quality, while also promoting a culture of continuous improvement and operational excellence. The research emphasizes that training and the commitment of both personnel and management are essential for successfully integrating SPC into service operations. The paper outlines the ways in which SPC could transform service sectors: a systematic, data-driven approach to quality management enhances both customer satisfaction and organizational productivity.
Keywords: Process, Industries, Quality Management
INTRODUCTION
In today's highly competitive market, service industries are under tremendous pressure to deliver consistently high-quality services that meet and exceed customer expectations. Unlike manufacturing, where quality-control methods have long been established and refined, service businesses must contend with variability and intangible aspects that make quality management harder to achieve. In this context, Statistical Process Control (SPC) emerges as an indispensable instrument, providing a structured, data-driven method for monitoring and improving service quality.

SPC comprises a wide variety of statistical methods and tools designed to monitor, control, and improve processes through the analysis of data. Although its theory and methods were originally developed for industrial environments, they have proved equally useful and advantageous in service settings. By implementing SPC, service organizations can achieve greater consistency, identify and reduce variability, and cultivate a culture of continuous improvement.

This article investigates the application of SPC in a variety of service industries, including healthcare, banking, hospitality, and retail. Through an examination of case studies and empirical data, we consider how SPC methods such as control charts, process capability analysis, and root cause analysis can be adapted to the specific issues of service processes. Our primary objective is to show that SPC not only helps maintain excellent service quality but also improves operational efficiency and customer satisfaction.

We begin with an outline of the principles of SPC and their conventional applications in manufacturing. We then discuss the quality problems that service sectors face and the distinctive characteristics they possess. At the heart of this investigation are case studies that illustrate effective implementations of SPC in several service sectors, followed by an analysis of the advantages and potential challenges. Finally, we offer recommendations for service businesses interested in incorporating SPC into their quality management strategy. By bridging the gap between the manufacturing and service sectors, this research aims to illustrate the transformative potential of SPC in service companies: its adoption can yield considerable improvements in service delivery, operational excellence, and competitive advantage, which ultimately contribute to the ongoing success of business organizations.
What exactly is meant by the term "statistical process control"?
The term "statistical process control," more commonly abbreviated as "SPC," refers to a technique that employs statistical analysis to monitor and evaluate quality, ultimately leading to an improvement in the manufacturing process. It is generally accepted that this is the definition of SPC. Manufacturers collect data that is of a high quality and is acquired in real time. This data is collected by manufacturers. Various pieces of apparatus and technologies are utilized in order to acquire the measurements that are taken of the process or the product themselves. Following that, the data that was gathered is utilized in the process of monitoring, evaluating, and regulating the manufacturing technique of the product using the information that was obtained. It is possible to gain the capability of determining whether or not the operations of a factory are operating at their maximum potential by collecting the relevant data and displaying it in the form of graphs and charts. This is a capability that may be acquired. As a result of highlighting the areas that require improvement, statistical process control (SPC) assists businesses in reducing the amount of waste, delays, and the likelihood of producing faulty items. This is accomplished by identifying the areas that require improvement.
Relevant theories of statistical process control
The theoretical underpinning of statistical process control consists of probability theory and mathematical statistics. From probability theory and mathematical statistics we know that the quality characteristic values measured while the process is in a steady state, also referred to as the quality indexes, approximately follow the normal distribution. The defining feature of the normal distribution is that the values cluster around a central value while still showing a certain spread. The normal distribution is widely used in quality control because of one frequently exploited property: the probability of falling inside the range [μ - 3σ, μ + 3σ] is 99.73%, regardless of the values of μ and σ. Figure 1 shows the result of this probability computation in detail.
Figure 1: Properties of normal distribution.
Therefore, the probability of falling outside the range [μ - 3σ, μ + 3σ] is 1 - 99.73% = 0.27%, and the probability of falling on the side greater than μ + 3σ, or on the side less than μ - 3σ, is 0.135% each.
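These probabilities can be verified numerically. The short Python sketch below (using only the standard library; the helper function normal_cdf is our own) computes the coverage of the μ ± 3σ interval and the corresponding tail probabilities.

    import math

    def normal_cdf(z):
        """Standard normal CDF evaluated at z, via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Probability of falling inside [mu - 3*sigma, mu + 3*sigma]
    inside = normal_cdf(3.0) - normal_cdf(-3.0)   # ~0.9973
    outside = 1.0 - inside                        # ~0.0027
    one_tail = 1.0 - normal_cdf(3.0)              # ~0.00135 per tail

    print(f"P(inside 3-sigma)  = {inside:.4%}")
    print(f"P(outside 3-sigma) = {outside:.4%}")
    print(f"P(single tail)     = {one_tail:.4%}")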
On the basis of this characteristic of the normal distribution, Shewhart developed the Shewhart control chart, often referred to as the conventional control chart. When manufacturers use statistical process control to regulate quality characteristics, they also use the process capability index CPK to judge directly whether the actual process meets the tolerance requirements. In its standard form,

CPK = min{ (USL - μ) / 3σ, (μ - LSL) / 3σ }

where USL and LSL denote the upper and lower specification limits for the quality characteristic. The larger the CPK, the more stable the manufacturing process and the more ample its capacity; the two are directly correlated. Process quality control is both the foundation of and the guarantee for product quality. It is usually measured by the process capability, i.e. the actual capability of the process under a stable state of statistical control, and is commonly expressed through the process capability index CPK. CP denotes the potential process capability, which reflects the extent to which the process as a whole can satisfy the specification.

By convention, a process with CP < 1 has insufficient capability, a process with 1 ≤ CP < 1.33 is only just adequate and acceptable mainly on economic grounds, and a process with 1.33 ≤ CP < 1.67 is at an adequate level. With the rapid growth in product volumes and high-technology manufacturing today, however, the original quality requirements and process standards no longer match present demand: when a process is run at the six-sigma quality level, the capability index must reach 2. As an evaluation indicator of process quality, a higher CP certainly indicates higher processing quality, but it also places higher requirements on equipment, operators, and other factors, and raises the manufacturing cost. An acceptable CP value therefore has to be identified through a combined review of the economics and the technology. When the deviation-degree parameter equals zero, the CPK index equals CP; the smaller the deviation degree, the closer the process mean lies to the specification centre and the better the CPK index, whereas a larger deviation degree leaves CPK in a worse position. It should be emphasized in particular that CP and CPK are determined by the magnitude and mean deviation of the process fluctuations under a statistically in-control condition. It is therefore essential first to ascertain whether the process is operating under control, and this requires statistical techniques such as control charts.
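To make the CP and CPK definitions concrete, the following Python sketch (a minimal illustration with invented service-time data; the function name process_capability is ours) estimates both indices from a sample, using the sample standard deviation as the estimate of σ.

    import statistics

    def process_capability(data, lsl, usl):
        """Estimate Cp and Cpk from sample data and specification limits."""
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)           # sample estimate of process sigma
        cp = (usl - lsl) / (6 * sigma)           # potential capability
        cpk = min((usl - mu) / (3 * sigma),      # actual capability, penalises
                  (mu - lsl) / (3 * sigma))      # off-centre processes
        return cp, cpk

    # Hypothetical service-time measurements (minutes) with spec limits 4-16 min
    times = [9.8, 10.4, 11.1, 9.5, 10.0, 10.7, 9.9, 10.2, 10.9, 10.3]
    cp, cpk = process_capability(times, lsl=4.0, usl=16.0)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")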
STATISTICAL PROCESS CONTROL
According to Shewhart (1931), control charts are instruments that use past experience to predict the future behaviour of a phenomenon. The control limits determine an approximation of the probability that the observation will fall within them. Control charts are built on the idea that "if the process is in a state of statistical control, then the outcomes are easily predicted."
Control charts are used for two primary reasons:
  1. to preserve the standard of the process, and
  2. to determine whether or not a process is in a state of statistical control by analysing the data.
When control charts are used, observations (samples) are grouped by time or by some other rational criterion. The mean, variance, range, and standard deviation are then computed for each group of observations. The control chart is constructed from a centre line, an upper control limit, and a lower control limit. The centre line represents the average value of the quality characteristic, while the upper and lower limits mark the boundaries within which the process is essentially in control. If data points fall outside these limit lines, the process is considered out of control. For a process to be regarded as in control, the data points should be spread evenly between the control limits; if the points lie predominantly on one side of the centre line, the process is considered out of control, since this indicates a systematic variation. Control limits are not requirements or targets for the process; they are predictions of the variation the process itself produces. Figure 2 shows a simple control chart.
Figure 2: Simple control chart
An effective control chart is one that detects the effects of special (assignable) causes of variation without generating false out-of-control signals. Control charts are constructed for specific quality-related characteristics, and there are two primary types, distinguished by whether the data being analysed are variables or attributes.
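For variables data, the construction described above can be illustrated with a short Python sketch; the subgroup data below are hypothetical, and the three-sigma limits are estimated from the spread of the subgroup means themselves rather than from the range-based constants of a standard X-bar chart.

    import statistics

    def xbar_chart_limits(subgroups):
        """Centre line and 3-sigma limits for subgroup means."""
        means = [statistics.mean(g) for g in subgroups]
        centre = statistics.mean(means)
        se = statistics.stdev(means)          # spread of the subgroup means
        return centre, centre + 3 * se, centre - 3 * se, means

    # Hypothetical daily samples of call-handling times (minutes), 4 calls per day
    subgroups = [[5.1, 4.8, 5.3, 5.0], [4.9, 5.2, 5.1, 4.7],
                 [5.4, 5.0, 4.9, 5.2], [5.0, 5.1, 4.8, 5.3]]
    centre, ucl, lcl, means = xbar_chart_limits(subgroups)
    signals = [i for i, m in enumerate(means, 1) if m > ucl or m < lcl]
    print(f"CL={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, signals at subgroups {signals}")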
OBJECTIVES
  1. To conduct research on the various measures and indicators used to evaluate the quality of health care
  2. To investigate Management's Role in Quality Assurance Control
RESEARCH METHOD
The Chen technique (the Sets method) is considered first. A monitoring scheme is implemented at a hospital with around 250 births per month and a baseline rate of 0.0009 abnormalities per month. Depending on the values of n and kh0, a given sequence will or will not trigger an alert. It is assumed that the scheme should be able to detect a rate five times the typical rate; after the data were analysed, g was accordingly set to 5. For the data provided, the values of kh0, n, P0, and P1 were determined as shown in Table 1.
Table 1: Parameters and probability values determined using the Sets method

Name of the Malformation | kh0 | n | P0(A) | P1(B)
Cleft palate/lip | 1024 | 5 | 0.0792 | 0.9512
 
The values of n and P0 can be determined by an iterative process: the P0 value is calculated for a trial value of n, and the n value is then recalculated from this P0. The resulting values of n and P0 are tabulated below:
Table 2: Iterative values of n and P0

n | P0 | n | P0
1 | 0.0588 | 6 | 0.0833
2 | 0.0624 | 5 | 0.07688
3 | 0.0666 | 5 | 0.07688
4 | 0.0714 | 5 | 0.00760
5 | 0.7688 | 5 | 0.07688
 
For n = 5 we obtain P0 = 0.7688, and using this value of P0 returns n = 5 again, so the iteration converges at n = 5. In other words, an alert would be triggered after the arrival of five sets in a row, each smaller than kh0 = 1024 births. The value P1(B) = 0.9512 indicates the probability of detecting a rate five times the usual rate after a run of five consecutive sets. An alert is therefore triggered whenever five successive set sizes are all below the threshold set size of 1024. The CUSUM control chart approach is considered next. The values of the parameters K and H are determined from the baseline rate and the average time duration; in this study they are obtained from the tables published by Ewan and Kemp (1960). For the given data, K = 1 and H = 3 are recommended. When these parameters are applied to the data provided, no out-of-control condition is signalled, because no point exceeds the value H = 3. Figures 3 and 4 give graphical illustrations of the Sets technique and the CUSUM control chart, respectively. The set sizes are listed in Table 3.
Figure 3: Graphical representation for sets method
Table 3: Set size of cleft lip and palate malformation

Set Number | Set Size | Set Number | Set Size
1 | 1686 | 17 | 101
2 | 1544 | 18 | 182
3 | 2244 | 19 | 1176
4 | 1773 | 20 | 2446
5 | 1797 | 21 | 489
6 | 106 | 22 | 1299
7 | 3318 | 23 | 1192
8 | 174 | 24 | 733
9 | 567 | 25 | 134
10 | 114 | 26 | 429
11 | 4145 | 27 | 867
12 | 1666 | 28 | 118
13 | 120 | 29 | 144
14 | 568 | 30 | 834
15 | 1533 | 31 | 172
16 | 544 | 32 | 1712
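The alarm rule used here (an alert once n = 5 consecutive set sizes fall below the threshold kh0 = 1024) can be checked mechanically. The following minimal Python sketch (the function name sets_method_alarm is ours) scans a sequence of set sizes and reports the point at which the rule first fires, using the first twelve entries of Table 3 as input.

    def sets_method_alarm(set_sizes, threshold=1024, n=5):
        """Return the 1-based index of the set at which n consecutive
        sets below the threshold have accumulated, or None if no alarm."""
        run = 0
        for i, size in enumerate(set_sizes, start=1):
            run = run + 1 if size < threshold else 0   # reset the run on a large set
            if run >= n:
                return i
        return None

    # First twelve set sizes from Table 3
    sizes = [1686, 1544, 2244, 1773, 1797, 106, 3318, 174, 567, 114, 4145, 1666]
    print("Alarm at set:", sets_method_alarm(sizes, threshold=1024, n=5))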
 
Figure 4: Graphical representation for CUSUM method
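As a complement to Figure 4, the following minimal Python sketch shows one way a one-sided count CUSUM of the kind described above can be computed, assuming the statistic S_i = max(0, S_(i-1) + x_i - K) with K = 1 and an alarm whenever S_i exceeds H = 3; the monthly counts used here are illustrative only and are not the study data.

    def cusum_counts(counts, k=1.0, h=3.0):
        """One-sided CUSUM for event counts: signal when the statistic exceeds h."""
        s, signals = 0.0, []
        for i, x in enumerate(counts, start=1):
            s = max(0.0, s + x - k)       # accumulate deviations above the reference value k
            if s > h:
                signals.append(i)
        return signals

    # Illustrative monthly malformation counts (not the study data)
    monthly_counts = [0, 1, 0, 2, 1, 0, 1, 3, 2, 1]
    print("Alarms at months:", cusum_counts(monthly_counts, k=1.0, h=3.0))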
RESULTS
For the data gathered here, the objective is to detect a rate six times the typical rate. During the study period a monitoring scheme was implemented at a hospital that cared for roughly 25,000 babies; the baseline rate is 0.0008 malformations per month. Three different monitoring schemes were compared in order to detect the change in the congenital-malformation data, the preferred scheme being the one that gives a genuine alert earliest. The results are presented in the following order:
First, the Sets approach was applied to the cleft lip and palate malformation data, and the findings are explained. The Sets approach is based on the number of observations that occur after one failure up to and including the next failure; this number is denoted by X. If this set is found to be less than or equal to T on n successive occasions, the alarm is triggered and the process is considered to have moved from in control to out of control. Depending on the values of T and n, the given sequence will therefore trigger an alert or not. The value of n can be computed by an iterative process: initially n is assumed to be 1 and p0 is computed as 0.046; using this p0, the n value is determined to be 5; the p0 value is then re-evaluated as 0.056 using this n. This iterative procedure is repeated for the various values of n, and all iterative values of p0 and n are given in the table below.
Table 4: Iterative values of n and p0

n | p0 | n | p0
1 | 0.046 | 5 | 0.056
2 | 0.048 | 5 | 0.056
3 | 0.050 | 5 | 0.056
4 | 0.053 | 5 | 0.056
5 | 0.056 | 5 | 0.056
 
Table 4 shows that the p0 values coincide once n = 5 is reached, so the value of n converges to 5. In other words, an alert would be triggered soon after the appearance of five consecutive sets, each smaller than the threshold value T. The following table presents the values of the two parameters T and n, together with the probability of a false alarm (p0) and the probability of a true alert (p1) when the rate is raised to six times the regular rate.
Table 5: The values of the parameters and the probability based on the sets technique
A graphical illustration of the Sets technique applied to the given data is shown in Figure 5. Under the Sets approach, an alert would be triggered soon after the arrival of five consecutive sets, each smaller than 960. From Figure 5 it is clear that no alert occurs.
Figure 5: Graphical representation of sets method
In the second step, the CUSCORE technique was applied to the cleft lip and palate malformation data, and the outcomes are discussed. Like the Sets method, the CUSCORE technique is defined in terms of the time elapsed between successive diagnoses. In the CUSCORE approach, as described earlier, the given sequence sounds an alert depending on the value of n and the threshold value T, and the value of T depends on the reference value k. The values of n and k can be determined using the iterative process described above. Table 6 shows that the expected out-of-control delay for n = 3 is greater than that for n = 2, so the iterations stop at this point. The best choice for n is therefore 2, with k = 0.22; for these values the expected delay E1(N) is 2.12, the smallest of the candidate values.
Table 6: Iterative values for the best possible combination of n and k

Parameter | n = 1 | n = 2 | n = 3
p0(A) | 0.03 | 0.19 | 0.39
k | 0.04 | 0.22 | 0.48
p1(A) | 0.21 | 0.79 | 0.91
E1(N) | 3.09 | 2.12 | 3.01
 
The following table displays the values of n, k, and T.
Table 7: The values of the parameters used in the CUSCORE technique
The CUSCORE approach for the data obtained is shown in Figure 6. Under the CUSCORE approach, an alert is triggered by the given sequence whenever the CUSCORE statistic exceeds 2. From Figure 6 it is clear that observations 5, 11, 13, 15, 17, 18, 19, 20, and 21 exceed the boundary value.
Figure 6: Graphical representation of CUSCORE method
Finally, the Bernoulli CUSUM control chart was applied to the data on malformations of the cleft lip and palate. The Bernoulli CUSUM statistic values were computed using the reference value derived from the supplied p0 (0.0008) and p1 (0.0048). The Bernoulli CUSUM control chart for the data obtained is shown in Figure 7. Graphical representations of the Bernoulli CUSUM are straightforward to interpret; to read the chart, one first examines its slope.
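A minimal Python sketch of a Bernoulli CUSUM along these lines is given below. It assumes the standard log-likelihood-ratio form of the statistic with the stated p0 = 0.0008 and p1 = 0.0048, scores each birth as 0 (normal) or 1 (malformation), and signals whenever the statistic exceeds h = 2; the birth sequence shown is illustrative only and is not the study data.

    import math

    def bernoulli_cusum(births, p0=0.0008, p1=0.0048, h=2.0):
        """Bernoulli CUSUM using log-likelihood-ratio increments.
        births: iterable of 0/1 outcomes (1 = malformation). Returns alarm indices."""
        up = math.log(p1 / p0)                   # score added for a malformation
        down = math.log((1 - p1) / (1 - p0))     # score added for a normal birth
        s, alarms = 0.0, []
        for i, x in enumerate(births, start=1):
            s = max(0.0, s + (up if x else down))
            if s > h:
                alarms.append(i)
                s = 0.0                          # restart after a signal
        return alarms

    # Illustrative sequence: 1,000 births with malformations at births 200 and 450
    births = [0] * 1000
    births[199] = births[449] = 1
    print("Alarms at births:", bernoulli_cusum(births))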
Figure 7: Graphical representation of Bernoulli CUSUM control chart
From Figure 7 it can be deduced that an out-of-control signal is generated whenever the statistic exceeds the control limit value h = 2. The Bernoulli CUSUM control chart indicates that the data are out of control from the 2001st observation to the 2480th observation, and again from the 6602nd observation to the 6763rd observation. With this method the first alert occurs at the 2001st observation. A comparison between the Bernoulli CUSUM control chart and the CUSCORE control chart shows that the Bernoulli CUSUM sends the out-of-control signal earlier: the CUSCORE control chart triggers its first alert only at the 6703rd observation, and the Sets method never triggers an alarm at all. It can therefore be concluded that the Bernoulli CUSUM control chart is more sensitive than the CUSCORE and Sets method approaches.
CONCLUSION
The purpose of this study was to gain an understanding of the applications of statistical process control (SPC) in health science and industry, and control-chart techniques were applied in a variety of ways; this section summarizes the outcomes. In the first analysis, a comparison was made between the Sets technique and the Poisson CUSUM control chart using data on cleft lip and palate malformations. The analysis showed that the Sets approach performed better than the CUSUM control chart: the Sets method triggers an alert after five consecutive sets of size less than 1024 (that is, after the 23rd set, following 28,784 births), whereas the CUSUM control chart never signals the increase in rate. According to these findings, the Sets technique is more effective than the CUSUM control chart at identifying rapid shifts from the normal incidence of the cleft lip and palate congenital deformity to an elevated rate. The purpose of such surveillance is to look for evidence pointing to the particular cause or factors responsible for the greater risk of cleft palate and lip deformity, and a system of this kind may likewise be developed to monitor any other uncommon illness. The findings of this research also raise public awareness of the rising prevalence of cleft lip and palate deformity in children. In the second analysis, the performance of three control-chart approaches for monitoring the rising rate of a rare event was compared; a relatively substantial rise in an uncommon illness may be associated with only a small number of cases, and such cases may not be readily recognized.
REFERENCE
  1. Adeoti A. Olatunde (2019). On the importance of statistical process control in health care, Research Journal of Medical Sciences, 3(2), 87-90.
  2. Alemi, F., Rom, W., and E. Eisenstine (2016). Risk-adjusted control charts for health care assessment, Annals of Operations Research, 67, 45-60.
  3. Alipour, H., and R. Nooroossana (2010). Multivariate fuzzy exponentially weighted moving average control charts, International Journal Advance Manufacturing Technology, 48, 1001-1007.
  4. Alireza Faraz, Baradaran Kazemzadeh, R., Bameni Moghadam, M., and Aliasghar Bazdar (2019). Constructing a fuzzy Shewhart control chart for variables when uncertainty and randomness are combined, Qualitative and Quantitative, 44, 905-914.
  5. Alwan, L.C (2016). Cusum Quality Control-Multivariate Approach, Communication in Statistics-Theory and Methods, 15, 3531-3543.
  6. Amirzadeh, A., Mashinchi, M., and A. Parchami (2019). Constructing of p-chart using degree of nonconformity, Information Science, 179, 150-160.
  7. Amirzadeh, V., Mashinchi, M., and M.A Yaghoobi (2018). Construction of Control charts using fuzzy multinomial quality, Journal of Mathematics and Statistics (4), 26-31.
  8. Antoine Duclos and Nicolas Voirin (2010). The p-control chart: a tool for care improvement, International Journal for Quality in Health Care, 5(22), 402-407.
  9. Aparisi, F., and C.L Haro (2001). Hotelling’s T2 Control Chart with Variable Sampling Intervals, International Journal of Production Research, 39(14), 3127-3140.
  10. Apley, D.W., and F. Tsung (2012). The Autoregressive T2 Chart for Monitoring Univariate Autocorrelated Processes, Journal of Quality Technology, 34, 80-96.
  11. Arnold Jesse, C., and R. Jr. Reynolds Marion (2001). CUSUM Control with Sample Size and Sampling Intervals, Journal of Quality Technology, 33(1), 66-81.
  12. Axelord, D.A., Guidinger, M.K., Metzger, R.A., Wiesner, R.H., Webb, R.L., and R.M Merion (2016). Transplant centre quality assessment using a continuously updateable, risk-adjusted technique (CUSUM), American Journal of Transplantation, 6, 313-323.
  13. Barbujani, G (2017). A review of statistical methods for continuous monitoring of malformation frequencies, European Journal of Epidemiology, 3, 67-77.
  14. Barbujani, G., and E. Calzolari (2016). Comparison of two statistical techniques for the surveillance of birth defects through a Monte Carlo simulation, Statistics in Medicine, 3, 239- 247.
  15. Barbujani, G., Ceccherini, I., and A. Russo (2016). Surveillance of birth defects: the Multicommunity Sets Technique tested by computer simulation, European Journal of Epidemiology, 2(1), 52-62.
  16. Barnard, G.A (1959). Control charts and Stochastic Processes, Journal of the Royal Statistical Society B, 21, 239-271.