Analyzing the Main Characteristics of Parametric Probability Distributions in Reliability Scenarios

Exploring Key Concepts and Proofs in Reliability Theory

by Devendra Kumar Pandey*,

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 15, Issue No. 12, Dec 2018, Pages 427 - 436 (10)

Published by: Ignited Minds Journals


ABSTRACT

In this article, we study the parametric probability distributions that are frequently used in reliability. The paper works through the theory of probability, investigates the aspects of probability distributions that are significant in reliability theory, and examines several related concepts. An approach to understanding the physical foundations of probability distributions is also discussed, as are the main directions of modern reliability theory and further important distributions. The goal of this paper is to enable the reader to gain an understanding of some of the key concepts explored in this theory, to provide examples for the reader to try, and to include proofs that have been broken down for clearer comprehension.

KEYWORDS

parametric probability distribution, reliability, theory, probability, distribution, physical foundations, modern reliability theory, hypothesis, concepts, proofs

I. INTRODUCTION

In this paper, we briefly present some of the most commonly used parametric probability distributions in reliability, restricting ourselves to univariate random quantities and the most basic forms of the distributions. Considerably more detailed presentations of these distributions, with historical notes, discussions of properties and applications, and further generalizations, are available in the literature. Parametric probability distributions are used both in stochastic analyses of the reliability of systems, where they are usually assumed to be fully known and corresponding properties of the system are analyzed, and in statistical inference, where process data are used to estimate the parameters of the distribution, often followed by a specific inference of interest. In the latter case, one must take care to also examine the assumptions underlying the particular parametric distribution assumed, to ensure a reasonable fit with the empirical data. We present some fundamental probability distributions for both continuous and discrete random quantities. For the former, parametric probability distributions can be specified via the probability density function (pdf) f(t), the cumulative distribution function (cdf) F(t), the survival function S(t), or the hazard rate h(t). For instance, for a non-negative random quantity T, as often used in reliability when one is interested in a random lifetime, these functions are related as follows:

F(t) = P(T ≤ t) = ∫₀ᵗ f(u) du,  S(t) = 1 − F(t),  h(t) = f(t)/S(t).

Further, we present the Exponential distribution, and the Weibull and Gamma distributions, which are popular models in reliability and which can be considered as generalizations of the Exponential distribution. Some important distributions for discrete random quantities, and some further distributions of significance for reliability, are also given. We present these main distributions in a very brief manner; many other articles use these distributions and give examples of their applications.
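As a concrete illustration of these relationships, the following Python sketch (assuming numpy and scipy are available; the Weibull shape and scale values are arbitrary choices for the example) evaluates f(t), F(t), S(t) and h(t) for a Weibull lifetime and verifies that S(t) = 1 − F(t):

```python
import numpy as np
from scipy import stats

# Arbitrary example: Weibull lifetime with shape 2 and scale 10
dist = stats.weibull_min(c=2.0, scale=10.0)
t = np.linspace(0.1, 30.0, 5)

f = dist.pdf(t)          # probability density function f(t)
F = dist.cdf(t)          # cumulative distribution function F(t)
S = dist.sf(t)           # survival function S(t) = 1 - F(t)
h = f / S                # hazard rate h(t) = f(t) / S(t)

# The survival function is one minus the cdf
assert np.allclose(S, 1.0 - F)
print(np.column_stack([t, f, F, S, h]))
```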

II. THEORY OF PROBABILITY

The theory of probability formalizes the description of probabilistic ideas through a set of rules. The most common reference for formalizing the rules of probability is the set of axioms proposed by Kolmogorov in 1933. For events Eᵢ in an event space containing n distinct events, these axioms can be stated as:

1. 0 ≤ P(Eᵢ) ≤ 1 for every event Eᵢ;
2. P(S) = 1, where S is the entire sample space;
3. P(E₁ ∪ E₂) = P(E₁) + P(E₂) when E₁ and E₂ are mutually exclusive.

Practically all reliability concepts are defined using probability as the metric of uncertainty.

Interpretations of Probability

The two most common interpretations of probability are:

• Frequency Interpretation

In the frequentist interpretation of probability, the probability of an event (failure) is defined as the limiting relative frequency of that event:

p = lim(n→∞) k/n,

where k is the number of occurrences of the event in n trials. Also called the classical approach, this interpretation assumes there exists a true probability of an event, p. The analyst uses the observed frequency of the event to estimate the value of p. The more occurrences of the event that have been observed, the more certain the analyst is of the estimate of p. This approach has limitations: for example, when data from events are not available (e.g., no failures occur in a test), p cannot be estimated, and the method cannot incorporate "soft evidence" such as expert opinion.
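A minimal simulation sketch of this idea (Python, numpy assumed available; the "true" failure probability p = 0.1 is hypothetical): as the number of observed trials grows, the relative frequency stabilizes near p.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p_true = 0.1                            # hypothetical "true" failure probability

for n in (10, 100, 10_000):
    failures = rng.random(n) < p_true   # simulate n independent trials
    p_hat = failures.mean()             # observed relative frequency
    print(f"n={n:6d}  estimate of p: {p_hat:.4f}")
```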

• Subjective Interpretation

The subjective interpretation of probability is also known as the Bayesian school of thought. This method defines the probability of an event as the degree of belief the analyst has in the occurrence of the event. This implies that probability is a product of the analyst's state of knowledge. Any evidence that would change the analyst's degree of belief must be considered when assessing the probability (including soft evidence). The assumption is made that the probability assessment is made by a coherent individual, and that any coherent individual with the same state of knowledge would make the same assessment. The subjective interpretation has the flexibility of incorporating many kinds of evidence to help assess the probability of an event. This is important in many reliability applications where the events of interest (e.g., system failure) are rare.

III. PHYSICAL FOUNDATIONS FOR PROBABILITY DISTRIBUTIONS

Since reliability is concerned with uncertainty questions about engineered systems, it would make sense to "derive" appropriate probability distributions based on the engineering physics of a given problem. To do this we have to adopt a Bayesian approach to probability. That is, judgments such as indifference with respect to certain basic random quantities must be made. For instance, suppose we are interested in the stress that causes yielding in a given material. Suppose furthermore that we believe Hooke's Law is valid in this situation. Starting with an indifference assumption regarding vectors of distortion energies with the same mean, we are led to the Weibull distribution for stress at yielding, with shape parameter equal to 2. A controversial figure, Max Mendel, appeared on the reliability scene in 1989. His MIT Ph.D. thesis in Mechanical Engineering concerned probability derivations based on engineering principles. Beginning in 1994, Mendel started investigating the use of differential geometry to derive probability distributions. This eventually led to the conclusion that lifetime spaces are not physical Euclidean spaces. The use of the hazard gradient, for instance, to model multivariate hazard rates is therefore incorrect, since it relies on the Euclidean metric. Shortle and Mendel argue as follows. Let LN be the space of possible lifetimes for N items. Euclidean space is not a good representation for LN for two reasons: (1) LN has a preferred orientation for its axes; (2) LN has no natural notion of distance. Note that Euclidean space is invariant under rotations, since rotations preserve the value of the inner product; that is, there is no preferred orientation for the axes. We can characterize the physical structure of a space by the transformations that leave the space invariant. For Euclidean space, these are translations and rotations. For LN these are changes of units of the individual items, because physical statements about lifetimes should not depend on the units used to measure lifetimes. In the language of differential geometry, the correct representation for the space of lifetimes is a collection of fiber bundles.

IV. RELIABILITY THEORY

Reliability theory is essentially the application of probability theory to the modeling of failures and the prediction of success probability. This section summarizes some of the key points in reliability theory. It is assumed that the reader has an introductory knowledge of probability theory.

We assume familiarity with the idea of a random variable and its outcomes, its pdf, and its cdf. In the case of reliability, the random variable of interest is the time to failure, T. We develop the basic relationships required by concentrating on the probability that the time to failure T lies in some interval (t₁, t₂]. This probability can be related to the density and distribution functions by

P(t₁ < T ≤ t₂) = F(t₂) − F(t₁),

where F(t) and f(t) are the cdf and pdf (or failure density function), respectively. If we divide by t₂ − t₁ and let t₂ → t₁, we obtain from the fundamental definition of the derivative the fact that the density function is the derivative of the distribution function:

f(t) = dF(t)/dt.

Clearly, the distribution function is then the integral of the density function:

F(t) = ∫₀ᵗ f(u) du.

Note that this function is equivalent to the probability of failure by time t. Since the random variable T is defined only on the interval from 0 to ∞ (negative time has no meaning), we have F(0) = 0 and F(∞) = 1. One can also define the probability of success at time t as the probability that the time to failure is larger than t (that is, T > t):

R(t) = P(T > t) = 1 − F(t),

where R(t) is the reliability function. Mathematically, the equations above summarize most of what we need to know about reliability theory. However, when we start to study failure data for various items, we find that the density function alone does not describe failure behavior in the most natural way; this motivates the failure rate introduced next.
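The following Python sketch (scipy assumed; the Exponential lifetime with mean 2 is an arbitrary example) numerically verifies that the pdf is the derivative of the cdf and computes an interval failure probability:

```python
import numpy as np
from scipy import stats

dist = stats.expon(scale=2.0)     # example lifetime distribution, mean 2
t = np.linspace(0.5, 5.0, 10)
dt = 1e-6

# Central-difference derivative of the cdf approximates the pdf
dF_dt = (dist.cdf(t + dt) - dist.cdf(t - dt)) / (2 * dt)
assert np.allclose(dF_dt, dist.pdf(t), atol=1e-5)

# Probability of failure in an interval (t1, t2] is F(t2) - F(t1)
t1, t2 = 1.0, 3.0
print(dist.cdf(t2) - dist.cdf(t1))
```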

Failure rate

A useful concept in reliability theory for describing failures in a system and its parts is the failure rate. It is defined as the probability that a failure per unit time occurs in an interval, say (t, t + Δt], given that a failure has not occurred before t. In other words, the failure rate is the rate at which failures occur in (t, t + Δt]:

failure rate = [F(t + Δt) − F(t)] / [Δt · R(t)].

The hazard rate is defined as the limit of the failure rate as the interval approaches zero, that is, as Δt → 0. Thus, we obtain the hazard rate at time t as

z(t) = lim(Δt→0) [F(t + Δt) − F(t)] / [Δt · R(t)] = f(t) / R(t).

The hazard rate is an instantaneous rate of failure at time t, given that the system survives up to t. Specifically, the quantity z(t)dt represents the probability that a system of age t will fail in the small interval from t to t + dt. Note that although there is a slight difference in the definitions of hazard rate and failure rate, they are often used interchangeably.
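A small Python sketch (scipy assumed; Weibull parameters are arbitrary example values) comparing the failure rate over a short interval with the hazard rate f(t)/R(t):

```python
from scipy import stats

dist = stats.weibull_min(c=1.5, scale=10.0)   # arbitrary example lifetime
t, dt = 5.0, 1e-4

# Failure rate over (t, t+dt], conditional on survival to t
cond_rate = (dist.cdf(t + dt) - dist.cdf(t)) / (dt * dist.sf(t))

# Hazard rate z(t) = f(t) / R(t); the two agree as dt -> 0
z = dist.pdf(t) / dist.sf(t)
print(cond_rate, z)   # nearly identical values
```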

Main Directions of Modern Reliability Theory

One can distinguish several directions of modern Reliability Theory, the main ones among them being:

1. Quality control of mass production
2. Pure reliability analysis
   • Structural models
   • Functional models
   • Maintenance models
3. Effectiveness ("performability")
4. Survivability
5. Safety

The first two points do not need any explanation: they are subjects of everyday engineering activity. The others deserve some clarification. Effectiveness ("performability") analysis relates to systems for which one cannot formulate an "all or nothing" type of failure criterion. Effectiveness describes a system's ability to perform its main functions even with partial capacity. Failures of some (or even a majority of) system parts lead only to gradual degradation of the system's ability to perform its functions. In practice, one deals with indices like "partial availability" or "partial system down time". These kinds of models are used to describe multi-channel systems (for example, telecommunication or transportation) or systems with embedded "functional redundancy", where there are alternative ways to perform system tasks, although with reduced quality. For systems whose operation is characterized by their current state, the effectiveness index E can be expressed as

E = Σ_s H_s · φ_s,

where H_s is the probability of state s, and φ_s is the conditional probability of successful system operation in that state. For systems whose effectiveness of operation depends on the trajectory of state changes, an analogous formula can easily be written in a general form (although few examples of useful applications are known for this case). It is worth mentioning that the first idea of this approach was actually presented in Kolmogorov's work [1945], where he analyzed the probability of a plane's destruction by anti-aircraft fire. Of course, one can reduce effectiveness analysis to "pure" reliability analysis by choosing a suitable failure measure. For example, a system might be considered failed if the total system "capacity" (or ability to perform its operation) decreases below some predetermined level. Survivability is a special property of a system: its ability to "withstand impacts". These impacts can be unpredictable internal failures (usually due to operator errors), environmental impacts (earthquakes, floods, hurricanes) or hostile human actions (enemy military operations or terrorist acts). In this case one assumes that the impacts are directed at the most critical parts of the system (for example, networks such as power or transportation systems). Survivability analysis is usually performed in minimax terms. The same type of reliability index is applied to military objects that are subjected to statistically unpredictable impacts. The survivability measure is usually expressed in terms of the set of system units whose destruction leads to the system's "death"; one possible characterization of survivability is the minimal such set X of destroyed system units. Although there is no probabilistic notion here as such, one sometimes uses some measure of the "possibility of occurrence" of destruction of various units. Safety is a special property of a system characterizing effective performance of its main intended functions (production of goods, electrical power generation, gas and oil transportation, and so on) without dangerous environmental consequences for people and nature. Safety is usually considered in probabilistic terms close to those used in a "pure" reliability analysis. In some sense, one considers in this case a two-dimensional model. For example, one can formulate an optimization problem of the following type:

minimize C(Ψ) subject to R(Ψ) ≥ R₀ and S(Ψ) ≥ S₀,

where Ψ is the system configuration, C is the system cost, R is the system reliability index, and S is the system safety index.
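As a small illustration of the effectiveness index E = Σ_s H_s · φ_s introduced above, the following Python sketch computes E for a hypothetical three-channel system; the state probabilities H_s and conditional performance levels φ_s are invented for the example.

```python
# Hypothetical 3-channel system: states = number of working channels.
# H[s]  : probability the system is in state s
# phi[s]: conditional probability of successful operation in state s
H   = {3: 0.85, 2: 0.12, 1: 0.02, 0: 0.01}
phi = {3: 1.00, 2: 0.80, 1: 0.40, 0: 0.00}

E = sum(H[s] * phi[s] for s in H)   # effectiveness index E = sum_s H_s * phi_s
print(f"effectiveness E = {E:.4f}")
```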

Safety is sometimes considered as part of a reliability-survivability problem. Indeed, many systems must not only operate reliably but at the same time provide protection against non-approved access. Many telecommunication systems dealing with military, banking or other highly classified information are required to be secure. Since the systems mentioned above are effectively considered failed if security is not provided, there is an interesting interplay of "two reliabilities".

Software Reliability. Now we come to the most confusing area in reliability theory and practice: so-called software reliability. Many attempts to apply traditional reliability concepts to this subject are unsuccessful and lead only to trouble. Who could explain what "MTBF" means for software?

The answer to such questions can be found in the answer to another: what do you mean by "software reliability"? Let us consider the basic features of reliability: • the stochastic nature of failures • the time dependence of failures • the independence of failures (or their probabilistic dependence). What does one have when analyzing software? Errors caused by software have no stochastic nature: they will repeat whenever certain conditions are repeated. Errors of software are, in a sense, not "objective"; they depend on the type of operations, the type of inputs and, finally, on the type of users. Allow me to compare software errors with printing errors in a book. Assume that there are many typographical errors in Chapter 1, and no errors at all in Chapter 2. One reader uses only Chapter 1 and complains that the book is very bad. Another uses only Chapter 2 and tells everyone that the book is flawless. Errors caused by software do not depend on time in the usually understood manner: if you do not use the software, it cannot fail! At best, in this sense software can be compared with a spare unit which may be used, though nobody knows the timing of this use. Finally, consider the independence of errors. There is no such notion as a "sample" for software: there is the phenomenon of cloning. "Replacement" of "failed" software makes no sense: you would exchange one Dolly for another Dolly with the same qualities, the same disease, the same properties. The issue of software quality is critical because more and more technical systems become "software" dependent. It is about time to state that this issue needs independent and intensive attention from applied mathematicians. In any case, it appears that attempts to put "hardware reliability" shoes on "software legs" are completely wrong and, moreover, will lead only to a scientific dead end.

V. NORMAL AND RELATED DISTRIBUTIONS

The Normal distribution (also known as the 'Gaussian distribution') is arguably the most important probability distribution in Statistics, as it arises as the limiting distribution when a sum of random quantities is considered (Central Limit Theorem). As a rule, it also plays a major role in quantitative risk assessment. Its direct use for random lifetimes is limited; however, it is frequently used as a suitable probability distribution for the (natural) logarithm of a random lifetime, in which case the lifetime's distribution is called 'Lognormal'. It is also related to the Inverse Gaussian distribution, which is significant in certain processes in reliability.

Normal distribution

The Normal distribution has two parameters, −∞ < µ < ∞ and σ² > 0, which are equal to its mean and variance, so its standard deviation is σ, and its pdf is

f(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)), −∞ < x < ∞.

The cdf of the Normal distribution is not available in closed form, so calculations have often used tables of the cdf of the Standard Normal distribution, which has µ = 0 and σ = 1, using the fact that Z = (X − µ)/σ is Standard Normally distributed if X is Normally distributed with parameters µ and σ². Such tables are included in statistics textbooks, but all main statistical and numerical software these days has good routines for the calculation of the Normal distribution cdf. Its direct use in reliability is often limited to modeling residual or error terms in regression models, but it plays an important role as a model for log-transformed lifetimes.
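A brief Python sketch (using scipy's Normal routines) showing that evaluating the cdf directly agrees with standardization via Z = (X − µ)/σ; the parameter values are arbitrary:

```python
from scipy import stats

mu, sigma = 50.0, 4.0       # arbitrary example parameters
x = 55.0

# Direct evaluation of the Normal cdf
p_direct = stats.norm.cdf(x, loc=mu, scale=sigma)

# Equivalent evaluation via the Standard Normal: Z = (X - mu) / sigma
p_standard = stats.norm.cdf((x - mu) / sigma)

print(p_direct, p_standard)   # identical values
```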

Lognormal distribution

A random quantity T is Lognormally distributed if ln T has a Normal distribution with parameters µ and σ², so that T has pdf

f(t) = (1/(tσ√(2π))) exp(−(ln t − µ)²/(2σ²)), t > 0.

The mean and variance of T are exp(µ + σ²/2) and (exp(σ²) − 1) exp(2µ + σ²), respectively. This distribution is quite popular as a model for lifetimes, despite the fact that its hazard rate has the somewhat unattractive property that it increases, from h(0) = 0, to a maximum, and thereafter decreases to 0 as t → ∞. Nonetheless, if attention is restricted particularly to early failure times, but with a wear-out effect, then this model might be suitable. One possible argument justifying the use of this model is related to the Central Limit Theorem: if a random quantity is the sum of many independent random quantities, then its distribution will be approximately Normal. Hence, a similar property holds for ln T if T has a Lognormal distribution, with the log-transform here implying that T, for this argument, can be interpreted as the product of many independent random quantities, which might be attractive in certain types of failure processes. An obvious explanation for the popularity of the Lognormal distribution was always the wide availability of statistical tables for the Normal distribution, an argument that is less relevant these days.
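The following Python sketch (scipy assumed; µ = 0, σ = 1 chosen for illustration) computes the Lognormal hazard rate h(t) = f(t)/S(t) on a grid and exhibits the increase-then-decrease behavior described above:

```python
import numpy as np
from scipy import stats

# Lognormal with ln T ~ Normal(mu, sigma^2); scipy uses s=sigma, scale=exp(mu)
mu, sigma = 0.0, 1.0
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

t = np.array([0.05, 0.2, 0.5, 1.0, 3.0, 10.0, 50.0])
h = dist.pdf(t) / dist.sf(t)     # hazard rate h(t) = f(t)/S(t)
print(h)   # increases from near 0 to a maximum, then decreases toward 0
```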

Inverse Gaussian distribution

Despite what the name might perhaps suggest, this is not the distribution of a simple transformation of a Normally distributed random quantity. However, it is becoming increasingly significant in reliability theory due to the fact that it is appropriate for stopping times in Brownian motion ('Gaussian') processes, and these (and related) processes are playing an increasingly important role in reliability modeling. Suppose that a Brownian motion, starting at 0 at time t = 0, has drift ν > 0 and variance σ²; then the time to reach the value a > 0 for the first time has an Inverse Gaussian distribution with parameters determined by ν, σ² and a. The pdf of this distribution, in one common parameterization, is

f(t) = (a/(σ√(2πt³))) exp(−(a − νt)²/(2σ²t)), t > 0.
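A simulation sketch (Python with numpy; drift, volatility and barrier values are arbitrary) of first-passage times of a Brownian motion with positive drift, whose empirical mean should be close to a/ν, the mean of the Inverse Gaussian distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
nu, sigma, a = 1.0, 0.5, 3.0      # drift, volatility, barrier (example values)
dt, n_paths = 1e-3, 200

hitting_times = []
for _ in range(n_paths):
    x, t = 0.0, 0.0
    while x < a:                  # simulate Brownian motion with drift
        x += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    hitting_times.append(t)

# For positive drift, the Inverse Gaussian mean first-passage time is a/nu
print(np.mean(hitting_times), a / nu)
```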

Padgett and Tomlinson present an excellent example of the use of such processes to describe degradation, giving a model of continuous cumulative damage. They present a general accelerated test model in which failure times and degradation measures are combined for inference about system lifetime, with the drift of the process depending on the acceleration factor. Their paper includes an illustrative example using degradation data observed in carbon-film resistors.

VI. IMPORTANT CONTINUOUS DISTRIBUTIONS

The following lists some of the important continuous distributions that are used frequently. These are defined through their PDFs and are used by choice whenever the results of an experiment appear to fit a particular distribution.

Exponential Distribution

The exponential distribution is defined by the following PDF, with scale factor λ > 0:

f(t) = λ e^(−λt), t ≥ 0.

It is graphically represented in Figure 1. The corresponding CDF is

F(t) = 1 − e^(−λt), t ≥ 0,

which is graphically represented in Figure 2. The mean and variance are 1/λ and 1/λ², respectively.
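A short Python check (scipy assumed; λ = 0.5 is an arbitrary example; note that scipy parameterizes the Exponential by scale = 1/λ) of the pdf, cdf, mean and variance stated above:

```python
import numpy as np
from scipy import stats

lam = 0.5                          # example scale factor (failure rate)
dist = stats.expon(scale=1/lam)    # scipy parameterizes by scale = 1/lambda

t = 2.0
print(dist.pdf(t), lam * np.exp(-lam * t))        # pdf  f(t) = lam*exp(-lam*t)
print(dist.cdf(t), 1 - np.exp(-lam * t))          # cdf  F(t) = 1 - exp(-lam*t)
print(dist.mean(), 1/lam, dist.var(), 1/lam**2)   # mean 1/lam, variance 1/lam^2
```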

Figure 1: Exponential PDF. Figure 2: Exponential CDF.

Weibull Distribution

The Weibull distribution is a two-parameter distribution, described by the following PDF:

f(t) = (β/α)(t/α)^(β−1) exp(−(t/α)^β), t ≥ 0.

The two parameters are α, the scale parameter, and β, the shape parameter. Their effects are illustrated in Figures 3 and 4.

Figure 3: Effect of the shape parameter on the Weibull distribution. Figure 4: Effect of the scale parameter on the Weibull distribution.

The mean of the Weibull distribution is given by αΓ(1 + 1/β). The variance is given by α²[Γ(1 + 2/β) − (Γ(1 + 1/β))²], where Γ(.) is the Gamma function, defined for n > 0 by Γ(n) = ∫₀^∞ x^(n−1) e^(−x) dx. The shapes of the associated hazard functions depend on the value of β: • if β > 1, h(t) is an increasing function of t; • if 0 < β < 1, h(t) is a decreasing function of t. The significance of the Weibull distribution lies in the fact that it can be made to fit a wide range of hazard functions, as dictated by experience or experiments.
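The following Python sketch (scipy assumed; α = 10, β = 2 are arbitrary example values) verifies the stated mean and variance via the Gamma function and shows the increasing hazard for β > 1:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

alpha, beta = 10.0, 2.0           # scale and shape (example values)
dist = stats.weibull_min(c=beta, scale=alpha)

mean = alpha * gamma(1 + 1/beta)
var = alpha**2 * (gamma(1 + 2/beta) - gamma(1 + 1/beta)**2)
assert np.isclose(dist.mean(), mean) and np.isclose(dist.var(), var)

# Hazard shape: increasing for beta > 1, decreasing for 0 < beta < 1
t = np.array([1.0, 5.0, 20.0])
print(dist.pdf(t) / dist.sf(t))   # increasing, since beta = 2 > 1
```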

Gamma Distribution

Another two-parameter distribution used in reliability studies is the Gamma distribution. It gains significance on account of its use in connection with the Weibull distribution. The two parameters are α, the shape parameter, and λ, the scale parameter. The PDF is given for t > 0 by

f(t) = (λ^α / Γ(α)) t^(α−1) e^(−λt),

and f(t) = 0 for t < 0. It is graphically represented in Figure 5 for different values of α. The mean and variance of the distribution are α/λ and α/λ², respectively.
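A quick Python check (scipy assumed; α = 3, λ = 0.5 are arbitrary; note that scipy's Gamma uses scale = 1/λ) of the stated mean and variance:

```python
from scipy import stats

alpha, lam = 3.0, 0.5             # shape and scale parameter (example values)
dist = stats.gamma(a=alpha, scale=1/lam)

# Mean alpha/lambda and variance alpha/lambda^2
print(dist.mean(), alpha / lam)     # 6.0  6.0
print(dist.var(), alpha / lam**2)   # 12.0 12.0
```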

Figure 5: Effect of the scale parameter on the Gamma distribution.

The corresponding CDF can be represented in closed form for the case when α is an integer:

F(t) = 1 − Σ_{k=0}^{α−1} e^(−λt) (λt)^k / k!.

This case is also called the Erlang distribution. The hazard function can display a wide variety of shapes, similar to those seen in the case of the Weibull distribution.

VII. IMPORTANT DISCRETE DISTRIBUTIONS

In reliability, as in many other application areas of stochastic modeling and statistical inference, one is also naturally interested in discrete random quantities, for example when counting the number of components that have performed a task successfully. The Binomial distribution is probably the most frequently used such distribution, though the Negative Binomial distribution and the Poisson distribution also deserve to be mentioned explicitly. Of course, there are again many variations of and generalizations to these and other distributions for discrete random quantities; for these we refer again to the literature as mentioned in the introductory section.

Binomial distribution

Assume that X denotes the number of successes in n independent trials, where each trial is a success with probability θ and a failure with probability 1 − θ. The probability distribution of X, for a given value of n, is given by

P(X = x) = C(n, x) θ^x (1 − θ)^(n−x), x = 0, 1, ..., n,

where C(n, x) = n!/(x!(n − x)!) is the binomial coefficient.

The expected value and variance of X are nθ and nθ(1 − θ), respectively. Calculation of these probabilities is straightforward if n is not excessively large; for large values of n two approximations can be used: for values of x near 0 or n, the Poisson distribution with expected value nθ can be used as a reasonable approximation, while for other values of x an approximation based on the Normal distribution with the same expected value and variance is appropriate. A standard 'textbook' case for a situation where the Binomial distribution is appropriate is n tosses of a possibly biased coin, where the coin lands heads up with probability θ on each toss, and where the number of tosses with heads up is counted. In situations where the potential outcomes fall into more than two unordered categories, the Multinomial distribution provides a suitable generalization of the Binomial distribution.
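A small Python sketch (scipy assumed; parameter values chosen for illustration) comparing exact Binomial probabilities with the Poisson approximation for x near 0 and the Normal approximation (with continuity correction) for central x:

```python
from scipy import stats

# Poisson approximation for x near 0: use the same expected value n*theta
n, theta, x = 1000, 0.002, 3
exact = stats.binom.pmf(x, n, theta)
poisson = stats.poisson.pmf(x, n * theta)
print(exact, poisson)             # very close values

# Normal approximation (same mean and variance) for central x
n2, theta2, x2 = 1000, 0.5, 510
mu, sd = n2 * theta2, (n2 * theta2 * (1 - theta2)) ** 0.5
normal = stats.norm.cdf(x2 + 0.5, mu, sd) - stats.norm.cdf(x2 - 0.5, mu, sd)
print(stats.binom.pmf(x2, n2, theta2), normal)
```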

Negative Binomial distribution

The Negative Binomial distribution is a variation of the Binomial distribution: the total number of trials is not a predetermined constant; rather, one counts the number of trials until a specified number of successes have occurred. Assume again that trials are independent, each a success with probability θ and a failure with probability 1 − θ, and let N denote the number of trials required to obtain the r-th success. Then the probability distribution of N, for fixed r, is given by

P(N = n) = C(n − 1, r − 1) θ^r (1 − θ)^(n−r), n = r, r + 1, ....

Sometimes the Negative Binomial distribution is defined slightly differently, namely as counting the number of trials before the r-th success. This distribution is important in its own right in probabilistic risk assessment, in reliability but also for example in quality control; it is also increasingly used in Bayesian statistical inference, as it occurs as the posterior predictive distribution for Poisson sampling with a conjugate Gamma prior distribution. The special case of r = 1 is known as the Geometric distribution, which simply counts the number of trials until the first success has occurred.
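A brief Python check (scipy assumed) of the pmf above; note that scipy's nbinom counts failures before the r-th success, so its argument is n − r:

```python
from math import comb
from scipy import stats

theta, r = 0.3, 2
n = 5   # probability the 2nd success occurs on trial 5

# P(N = n) = C(n-1, r-1) * theta^r * (1-theta)^(n-r)
direct = comb(n - 1, r - 1) * theta**r * (1 - theta) ** (n - r)

# scipy's nbinom counts *failures before* the r-th success, i.e. k = n - r
via_scipy = stats.nbinom.pmf(n - r, r, theta)

print(direct, via_scipy)   # identical values
```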

Poisson distribution

A random quantity X has a Poisson distribution with expected value µ if its probability distribution is given by

P(X = x) = e^(−µ) µ^x / x!, x = 0, 1, 2, ....

This distribution is especially significant in stochastic processes, where it describes the number of events in a given period of time. For instance, for a Non-Homogeneous Poisson Process with failure rate function λ(t), the number of events in the time interval (t₁, t₂] has a Poisson distribution with expected value equal to the integral of λ(t) over (t₁, t₂].
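The following Python sketch (scipy assumed; the linearly increasing rate function λ(t) = 0.2t is a hypothetical example) computes the expected number of events of a Non-Homogeneous Poisson Process in an interval and the resulting Poisson probabilities:

```python
from scipy import stats
from scipy.integrate import quad

# Hypothetical failure rate function of a Non-Homogeneous Poisson Process
lam = lambda t: 0.2 * t          # linearly increasing rate (example choice)

t1, t2 = 2.0, 5.0
mu, _ = quad(lam, t1, t2)        # expected number of events in (t1, t2]

# Probability of exactly k events in the interval
for k in range(4):
    print(k, stats.poisson.pmf(k, mu))
```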

VIII. OTHER DISTRIBUTIONS

Numerous other parametric probability distributions play a significant role in particular reliability applications; we briefly mention a few examples but refer to the literature, for example the sources mentioned in Section I, for more details. Historically, the Log-logistic distribution was frequently used in reliability, with a lifetime T having a Log-logistic distribution if ln T has a Logistic distribution, with pdf characterized by parameters −∞ < µ < ∞ and σ > 0. The Logistic distribution is very similar to the Normal distribution, but its survival function is available in closed form, and hence it makes it easier to deal with right-censored observations, which frequently occur in reliability applications. The Uniform distribution, which simply has a constant pdf over a finite interval, is also useful in some reliability applications. For instance, it models the times of events in a Homogeneous Poisson Process over a fixed interval, conditional on the number of events in that interval. The Uniform distribution is also popular in Bayesian

statistics, as a prior distribution which, as is sometimes argued, can reflect very limited prior information being available. The Beta distribution is likewise popular in Bayesian statistics, as it is a conjugate prior for Binomial sampling models. The Gompertz distribution is characterized by an attractive form for the hazard rate, namely h(t) = λφ^t, with scale parameter λ > 0 and shape parameter φ > 0. Clearly, φ = 1 gives the Exponential distribution, and φ > 1 models an increasing hazard rate (and consequently can be used to model wear-out); however, for φ < 1, so a decreasing hazard rate ('wear-in'), a problem occurs, as the corresponding probability distribution is improper (i.e., its density does not integrate to one over t > 0). This last aspect could be interpreted as though a proportion of the population considered cannot experience the event of interest. We mention this distribution mostly because of its apparently attractive hazard rate, and to emphasize the complications that can occur even with such rather simple mathematical forms. The Gompertz distribution has been used in actuarial mathematics since the early nineteenth century. When we presented the Weibull distribution above, we briefly alluded to an Extreme Value distribution. In general, there are several Extreme Value distributions (three basic functional forms), which arise for the maximum (or minimum) of n identically distributed real-valued random quantities as n → ∞, and which often provide good approximations to the distribution of this maximum (or minimum) if n is large. These distributions are useful in a variety of reliability applications, for instance related to the reliability of systems or to structural reliability and overloads. In this paper, we have restricted attention to univariate random quantities. Of course, in many reliability applications one is interested in multivariate random quantities; for a good presentation of related statistical theory, including useful parametric distributions for multivariate data, we refer to the literature.
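To illustrate the impropriety noted above, the following Python sketch (assuming the Gompertz hazard form h(t) = λφ^t described in the text, with arbitrary example values and scipy available) integrates the implied density numerically and shows that the total mass is below one when φ < 1:

```python
import numpy as np
from scipy.integrate import quad

lam, phi = 1.0, 0.5     # scale and shape; phi < 1 gives a decreasing hazard

h = lambda t: lam * phi**t                       # Gompertz hazard rate
H = lambda t: lam * (phi**t - 1) / np.log(phi)   # cumulative hazard
f = lambda t: h(t) * np.exp(-H(t))               # implied density

total, _ = quad(f, 0, np.inf)
print(total)            # about 0.76 < 1: the distribution is improper
```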

IX. CONCLUSION

In conclusion, we have seen a few of the many facets reliability theory has to offer. We have dissected various systems and learned how to compute their structure and reliability functions, which led us into being able to prove some interesting theorems regarding these systems and their components. For more complex statistical models, such as mixture models, Bayesian hierarchical models, models with covariates or parametric models for processes, we refer to the literature for further discussion; several examples are given in other articles in this reference work. Parametric models are certainly not always appropriate; in particular, if one has much data available, the use of non-parametric or semi-parametric methods might be preferable because of their increased flexibility to adapt to specific features of the data. Due to increased computational power, applications of non- and semi-parametric methods have become more widely accessible, but parametric distributions are likely to remain significant tools in reliability and risk assessment. It is usually an advantage if the parameters have a natural interpretation. Several of the parametric probability distributions discussed in this paper, including the Normal, Exponential and Gamma (but, perhaps surprisingly, not the Weibull distribution, unless its shape parameter is assumed to be a known constant), belong to the so-called Exponential Family of distributions, for which the parameters can be related to low-dimensional sufficient statistics that summarize the data; this has the additional advantage of available conjugate priors to simplify computation in Bayesian statistics. When choosing a parametric distribution as a model in a particular application, one often needs to look for a suitable balance between the simplicity of the model and corresponding computational aspects, and how flexible and realistic the model is. As mentioned before, for the dependability of statistical inferences based on an assumed parametric model, it is important that the model choice is well explained and, where possible, diagnostic methods (e.g. 'goodness-of-fit tests') are used to check whether the model fits well with the available data.

REFERENCES

1. Dombi, J., Jónás, T. & Tóth, Z. E. (2016). "The Epsilon Probability Distribution and its Application in Reliability Theory", Vol. 15, No. 1, pp. 197-216.
2. Dombi, J., Jónás, T., Tóth, Z. E. & Árva, G. (2016). "The omega probability distribution and its applications in reliability theory", Quality and Reliability Engineering International.
3. Weibull, W. (1951). "A statistical distribution function of wide applicability", Journal of Applied Mechanics, Vol. 18, pp. 293-297.
4. Viti, A., Terzi, A. & Bertolaccini, L. (2015). "A practical overview on probability distributions", Journal of Thoracic Disease, 7(3), pp. E7-E10. doi: 10.3978/j.issn.2072-1439.2015.01.37. PMCID: PMC4387424.
5. Leemis, L. M., Luckett, D. J., Powell, A. G. & Vermeer, P. E. (2012). "Univariate Probability Distributions", Journal of Statistics Education, 20:3. doi: 10.1080/10691898.2012.11889648.
6. Barlow, R. E. & Proschan, F. (1965). Mathematical Theory of Reliability. Wiley.
7. Ross, S. M. (2010). Introduction to Probability Models. Academic Press.
8. Padgett, W. J. & Tomlinson, M. A. (2004). "Inference from accelerated degradation and failure data based on Gaussian process models", Lifetime Data Analysis, 10, pp. 191-206.
9. Birolini, A. (1999). "Basic Probability Theory", in: Reliability Engineering. Springer, Berlin, Heidelberg.

Corresponding Author Devendra Kumar Pandey*

Professor & Director, Unique Institute of Management & Technology, Ghaziabad, India

devkp60@rediffmail.com