
Inference on P(X<Y) for Extreme Values

Sudhansu S. Maiti1, Sudhir Murmu2

1Department of Statistics, Visva-Bharati University, Santiniketan, India

2District Rural Development Agency, Khunti, Jharkhand, India

Abstract

The article considers the problem of estimating $R = P(X < Y)$ when $X$ and $Y$ follow two independent extreme value distributions. The maximum likelihood estimate of $R$ is obtained, and the estimates under different distributional assumptions are compared for complete samples. Lower confidence limits of $R$ are obtained by the delta method and by bootstrap methods. The Bayes estimate of $R$ is also calculated using an MCMC approach.


1. Introduction

Inference on $R = P(X < Y)$ is used in various applications, e.g. stress-strength reliability, statistical tolerancing, measuring demand-supply system performance, measuring heritability of a genetic trait, bio-equivalence studies, etc. It is observed, especially in military and medical sciences, that system designers, reliability practitioners and experts in the medical field seek to assign a high probability to the event that the system/unit remains operable at its minimum strength while encountering the maximum stress at that time epoch. To meet this objective, it seems reasonable to define $R = P(X < Y)$ with $X = \max(X_1, \ldots, X_k)$ and $Y = \min(Y_1, \ldots, Y_k)$.

Now the cumulative distribution function of $X$ is given by

$$F_X(x) = [F(x)]^k,$$

if $X_1, \ldots, X_k$ are independent and identically distributed with cdf $F$, and the cumulative distribution function of $Y$ is given by

$$G_Y(y) = 1 - [1 - G(y)]^k,$$

if $Y_1, \ldots, Y_k$ are independent and identically distributed with cdf $G$.

Here we assume that the $X_i$'s and $Y_j$'s follow independent Weibull distributions with common shape parameter $a$, and the probability density functions are given by

$$f(x; \lambda_1, a) = a\lambda_1 x^{a-1} e^{-\lambda_1 x^a}, \qquad x > 0,\ \lambda_1 > 0,\ a > 0,$$

and

$$g(y; \lambda_2, a) = a\lambda_2 y^{a-1} e^{-\lambda_2 y^a}, \qquad y > 0,\ \lambda_2 > 0,\ a > 0,$$

respectively.

Then

$$F_X(x) = \left(1 - e^{-\lambda_1 x^a}\right)^k,$$

and the probability density function is

$$f_X(x) = k a \lambda_1 x^{a-1} e^{-\lambda_1 x^a} \left(1 - e^{-\lambda_1 x^a}\right)^{k-1},$$

and

$$G_Y(y) = 1 - e^{-k\lambda_2 y^a},$$

and the probability density function is

$$g_Y(y) = k a \lambda_2 y^{a-1} e^{-k\lambda_2 y^a}.$$

Hence

$$R = P(X < Y) = \int_0^\infty F_X(y)\, g_Y(y)\, dy = \sum_{j=0}^{k} (-1)^j \binom{k}{j} \frac{k\lambda_2}{j\lambda_1 + k\lambda_2}. \qquad (1.1)$$

Note that $R$ is free of the common shape parameter $a$.

If $k = 1$, i.e. in the case of stress-strength reliability for a single component, $R = \lambda_1/(\lambda_1 + \lambda_2)$, and its inferential aspects have been studied in McCool (1991) and Mukherjee and Maiti (1998); if $a = 1$ also, then the model is just the exponential case, and this case has been studied by a host of authors. If $a = 1$ only, then the situation reduces to the exponential case of a system with $k$ identical components, and $R$ is again given by (1.1), since (1.1) does not involve $a$.
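Formula (1.1) is straightforward to evaluate numerically. As a quick illustration, the following R sketch (the function name R.extreme is ours, for illustration only) computes $R$ for given $\lambda_1$, $\lambda_2$ and $k$, and reproduces the true values quoted later in Section 5:

```r
# R = P(X < Y) from equation (1.1):
# R = sum_{j=0}^{k} (-1)^j C(k,j) * k*lambda2 / (j*lambda1 + k*lambda2)
R.extreme <- function(lambda1, lambda2, k) {
  j <- 0:k
  sum((-1)^j * choose(k, j) * k * lambda2 / (j * lambda1 + k * lambda2))
}

R.extreme(2, 0.5, 1)  # 0.8000000
R.extreme(2, 0.5, 2)  # 0.5333333
R.extreme(2, 0.5, 3)  # 0.3324675
```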

In this article, we have considered the estimation problem of $R$ for the Weibull family of distributions. We have found the maximum likelihood estimate (MLE) of $R$ for complete samples. Emphasis has been given to finding lower confidence limits (lcls), as these are of practical importance: practitioners want to assert that the system has at least attained this limit. We use the delta method and bootstrap methods to find the lcls. We also derive the Bayes estimate of $R$ using an MCMC approach.

The paper is organized as follows. Section 2 is devoted to finding the MLE and lcls of $R$. Bayes estimation of $R$ is discussed in Section 3. Simulation results are discussed in Section 4, a simulated data analysis is presented in Section 5, and Section 6 concludes.

2. Inference about R

2.1. Maximum Likelihood Estimation of R

To compute the MLE of $R$, we have to obtain the MLEs of $\lambda_1$, $\lambda_2$ and $a$. Suppose $x_1, \ldots, x_m$ is a random sample from $f(x; \lambda_1, a)$ and $y_1, \ldots, y_n$ is a random sample from $g(y; \lambda_2, a)$. Hence, the underlying log-likelihood function is

$$l(\lambda_1, \lambda_2, a) = (m+n)\ln a + m\ln\lambda_1 + n\ln\lambda_2 + (a-1)\left(\sum_{i=1}^{m}\ln x_i + \sum_{j=1}^{n}\ln y_j\right) - \lambda_1\sum_{i=1}^{m}x_i^a - \lambda_2\sum_{j=1}^{n}y_j^a.$$

Then the MLE of $\lambda_1$ is to be obtained from the relation

$$\hat{\lambda}_1 = \frac{m}{\sum_{i=1}^{m} x_i^{\hat{a}}},$$

and that of $\lambda_2$ is from

$$\hat{\lambda}_2 = \frac{n}{\sum_{j=1}^{n} y_j^{\hat{a}}},$$

and the MLE of $a$ is to be obtained by solving the equation

$$\frac{m+n}{\hat{a}} + \sum_{i=1}^{m}\ln x_i + \sum_{j=1}^{n}\ln y_j - \hat{\lambda}_1\sum_{i=1}^{m}x_i^{\hat{a}}\ln x_i - \hat{\lambda}_2\sum_{j=1}^{n}y_j^{\hat{a}}\ln y_j = 0.$$

An estimate of $R$ is then obtained from (1.1) by replacing $\lambda_1$ and $\lambda_2$ with $\hat{\lambda}_1$ and $\hat{\lambda}_2$ respectively.

We have already mentioned that when $a = 1$, the model reduces to the exponential case. We will concentrate further inference on this situation only. Under this situation, the estimates of $\lambda_1$ and $\lambda_2$ are of the form

$$\hat{\lambda}_1 = \frac{m}{\sum_{i=1}^{m} x_i} = \frac{1}{\bar{x}} \qquad\text{and}\qquad \hat{\lambda}_2 = \frac{n}{\sum_{j=1}^{n} y_j} = \frac{1}{\bar{y}},$$

respectively.
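In the exponential case the plug-in MLE of $R$ is thus immediate to compute. A minimal R sketch (the function name mle.R is ours), reusing R.extreme() from Section 1:

```r
# MLEs of lambda1 and lambda2 in the exponential case (a = 1) and the
# corresponding plug-in MLE of R from equation (1.1).
mle.R <- function(x, y, k) {
  lambda1.hat <- length(x) / sum(x)   # m / sum(x_i) = 1 / xbar
  lambda2.hat <- length(y) / sum(y)   # n / sum(y_j) = 1 / ybar
  list(lambda1 = lambda1.hat, lambda2 = lambda2.hat,
       R = R.extreme(lambda1.hat, lambda2.hat, k))
}
```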

Let us write

$$\hat{R} = R(\hat{\lambda}_1, \hat{\lambda}_2),$$

where $R(\lambda_1, \lambda_2)$ is given by (1.1).

Now, the asymptotic variance-covariance matrix of $(\hat{\lambda}_1, \hat{\lambda}_2)$ is given by

$$\Sigma = \begin{pmatrix} \lambda_1^2/m & 0 \\ 0 & \lambda_2^2/n \end{pmatrix}.$$

Let $b = \left(\frac{\partial R}{\partial\lambda_1}, \frac{\partial R}{\partial\lambda_2}\right)'$; the delta method with $\hat{R} = R(\hat{\lambda}_1, \hat{\lambda}_2)$ yields the asymptotic variance of $\hat{R}$ as

$$\sigma_{\hat{R}}^2 = b'\Sigma b = \left(\frac{\partial R}{\partial\lambda_1}\right)^2 \frac{\lambda_1^2}{m} + \left(\frac{\partial R}{\partial\lambda_2}\right)^2 \frac{\lambda_2^2}{n}.$$

Here

$$\frac{\partial R}{\partial\lambda_1} = -\sum_{j=0}^{k} (-1)^j \binom{k}{j} \frac{jk\lambda_2}{(j\lambda_1 + k\lambda_2)^2}$$

and

$$\frac{\partial R}{\partial\lambda_2} = \sum_{j=0}^{k} (-1)^j \binom{k}{j} \frac{jk\lambda_1}{(j\lambda_1 + k\lambda_2)^2}.$$

Assuming $(\hat{R} - R)/\sigma_{\hat{R}}$ to be a standard normal variate, a $100(1-\alpha)\%$ lower confidence bound for $R$ can be constructed as $\hat{R} - z_\alpha \sigma_{\hat{R}}$, where $z_\alpha$ is the upper $\alpha$-point of the standard normal distribution.

Remark 2.1. $\sigma_{\hat{R}}$ is to be obtained by replacing the parameters by their ML estimates.
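The delta-method lower confidence limit can be coded directly; the sketch below (se.R and lcl.delta are our names) evaluates the partial derivatives above at the MLEs, per Remark 2.1:

```r
# Delta-method standard error of R-hat, with parameters replaced by their
# MLEs (Remark 2.1), and the asymptotic 100(1-alpha)% lower confidence limit.
se.R <- function(x, y, k) {
  m <- length(x); n <- length(y)
  est <- mle.R(x, y, k)
  l1 <- est$lambda1; l2 <- est$lambda2
  j <- 0:k
  dR.dl1 <- -sum((-1)^j * choose(k, j) * j * k * l2 / (j * l1 + k * l2)^2)
  dR.dl2 <-  sum((-1)^j * choose(k, j) * j * k * l1 / (j * l1 + k * l2)^2)
  sqrt(dR.dl1^2 * l1^2 / m + dR.dl2^2 * l2^2 / n)
}

lcl.delta <- function(x, y, k, alpha = 0.05) {
  mle.R(x, y, k)$R - qnorm(1 - alpha) * se.R(x, y, k)
}
```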

2.2. Bootstrap Lower Confidence Limits

In this subsection, we propose two lower confidence limits based on parametric bootstrap methods: (i) the percentile bootstrap method (referred to from now on as Boot-p), based on the idea of Efron (1982, 1988), and (ii) the bootstrap-t method (referred to from now on as Boot-t), based on the idea of Hall (1988). We briefly illustrate how to estimate lower confidence limits of $R$ using both methods.

Boot-p Method:

Step 1: From the samples $x_1, \ldots, x_m$ and $y_1, \ldots, y_n$, compute $\hat{\lambda}_1$ and $\hat{\lambda}_2$.

Step 2: Using $\hat{\lambda}_1$, generate a bootstrap sample $x_1^*, \ldots, x_m^*$, and similarly, using $\hat{\lambda}_2$, generate a bootstrap sample $y_1^*, \ldots, y_n^*$. Based on these samples, compute the bootstrap estimate of $R$ using (1.1), say $\hat{R}^*$.

Step 3: Repeat Step 2 NBOOT times.

Step 4: Let $H(x) = P(\hat{R}^* \le x)$ be the cumulative distribution function of $\hat{R}^*$.

Define $\hat{R}_{Boot\text{-}p}(x) = H^{-1}(x)$ for a given $x$. The approximate $100(1-\alpha)\%$ lower confidence limit of $R$ is given by $\hat{R}_{Boot\text{-}p}(\alpha)$.
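A sketch of the Boot-p limit under the exponential model (lcl.boot.p is our name; rexp() draws the parametric bootstrap samples from the fitted exponentials):

```r
# Parametric percentile bootstrap (Boot-p) lower confidence limit for R.
lcl.boot.p <- function(x, y, k, alpha = 0.05, NBOOT = 1000) {
  est <- mle.R(x, y, k)
  m <- length(x); n <- length(y)
  R.star <- replicate(NBOOT, {                 # Steps 2-3
    xs <- rexp(m, rate = est$lambda1)          # bootstrap sample x*
    ys <- rexp(n, rate = est$lambda2)          # bootstrap sample y*
    mle.R(xs, ys, k)$R                         # R-hat* from (1.1)
  })
  unname(quantile(R.star, probs = alpha))      # Step 4: H^{-1}(alpha)
}
```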

Bootstrap-t Method:

Step 1: From the samples $x_1, \ldots, x_m$ and $y_1, \ldots, y_n$, compute $\hat{\lambda}_1$ and $\hat{\lambda}_2$ (and hence $\hat{R}$).

Step 2: Using $\hat{\lambda}_1$, generate a bootstrap sample $x_1^*, \ldots, x_m^*$, and similarly, using $\hat{\lambda}_2$, generate a bootstrap sample $y_1^*, \ldots, y_n^*$. Based on these samples, compute the bootstrap estimate of $R$ using (1.1), say $\hat{R}^*$, and the following statistic:

$$T^* = \frac{\hat{R}^* - \hat{R}}{\sigma_{\hat{R}^*}}.$$

Compute $\sigma_{\hat{R}^*}$ using Remark 2.1.

Step 3: Repeat Step 2 NBOOT times.

Step 4: From the NBOOT $T^*$ values obtained, determine the lower confidence limit of $R$ as follows. Let $H(x) = P(T^* \le x)$ be the cumulative distribution function of $T^*$. For a given $\alpha$, define

$$\hat{R}_{Boot\text{-}t}(\alpha) = \hat{R} + \sigma_{\hat{R}} H^{-1}(\alpha).$$

Here also, $\sigma_{\hat{R}}$ can be computed as mentioned in Remark 2.1. The approximate $100(1-\alpha)\%$ lower confidence limit of $R$ is given by $\hat{R}_{Boot\text{-}t}(\alpha)$.
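A matching sketch for the Boot-t limit, reusing mle.R() and se.R() from the sketches above:

```r
# Bootstrap-t (Boot-t) lower confidence limit for R.
lcl.boot.t <- function(x, y, k, alpha = 0.05, NBOOT = 1000) {
  est <- mle.R(x, y, k)
  m <- length(x); n <- length(y)
  T.star <- replicate(NBOOT, {                 # Steps 2-3
    xs <- rexp(m, rate = est$lambda1)
    ys <- rexp(n, rate = est$lambda2)
    (mle.R(xs, ys, k)$R - est$R) / se.R(xs, ys, k)  # studentized statistic T*
  })
  # Step 4: R-hat + sigma_R-hat * H^{-1}(alpha)
  est$R + se.R(x, y, k) * unname(quantile(T.star, probs = alpha))
}
```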

3. Bayes Estimation of R

In this section, we obtain the Bayes estimate of $R$ under the assumption that the parameters $\lambda_1$, $\lambda_2$ and $a$ are random variables. We mainly obtain the Bayes estimate of $R$ under the squared error loss by the Gibbs sampling technique. It is assumed that $\lambda_1$, $\lambda_2$ and $a$ have independent gamma priors, with $\lambda_1 \sim Gamma(a_1, b_1)$, $\lambda_2 \sim Gamma(a_2, b_2)$ and $a \sim Gamma(a_3, b_3)$. Based on the above assumptions, we have the likelihood function of the observed data as

$$L(data \mid \lambda_1, \lambda_2, a) = a^{m+n}\lambda_1^{m}\lambda_2^{n}\left(\prod_{i=1}^{m}x_i\prod_{j=1}^{n}y_j\right)^{a-1}e^{-\lambda_1\sum_{i=1}^{m}x_i^a - \lambda_2\sum_{j=1}^{n}y_j^a}.$$

Therefore, the joint density of the data, $\lambda_1$, $\lambda_2$ and $a$ can be obtained as

$$\pi(data, \lambda_1, \lambda_2, a) = L(data \mid \lambda_1, \lambda_2, a)\,\pi(\lambda_1)\,\pi(\lambda_2)\,\pi(a),$$

where $\pi(\cdot)$ denotes the prior distribution. Therefore, the joint posterior density of $\lambda_1$, $\lambda_2$ and $a$ given the data is

$$\pi(\lambda_1, \lambda_2, a \mid data) = \frac{\pi(data, \lambda_1, \lambda_2, a)}{\int_0^\infty\int_0^\infty\int_0^\infty \pi(data, \lambda_1, \lambda_2, a)\, d\lambda_1\, d\lambda_2\, da}.$$

Since the normalizing constant cannot be obtained analytically, we adopt the Gibbs sampling technique to compute the Bayes estimate of $R$.

The conditional posterior pdfs of $\lambda_1$, $\lambda_2$ and $a$ are as follows:

$$\lambda_1 \mid a, data \sim Gamma\left(a_1 + m,\; b_1 + \sum_{i=1}^{m}x_i^a\right), \qquad \lambda_2 \mid a, data \sim Gamma\left(a_2 + n,\; b_2 + \sum_{j=1}^{n}y_j^a\right),$$

and

$$\pi(a \mid \lambda_1, \lambda_2, data) \propto a^{a_3+m+n-1}\left(\prod_{i=1}^{m}x_i\prod_{j=1}^{n}y_j\right)^{a-1}e^{-b_3 a - \lambda_1\sum_{i=1}^{m}x_i^a - \lambda_2\sum_{j=1}^{n}y_j^a}.$$

To generate random numbers from these distributions, we use the Metropolis-Hastings method with appropriate proposal distributions. Therefore, the algorithm of Gibbs sampling is as follows:

Step 1: Start with an initial guess $(\lambda_1^{(0)}, \lambda_2^{(0)}, a^{(0)})$.

Step 2: Set $t = 1$.

Step 3: Using Metropolis-Hastings, generate $\lambda_1^{(t)}$ from $\pi(\lambda_1 \mid a^{(t-1)}, data)$ with an appropriate proposal distribution.

Step 4: Using Metropolis-Hastings, generate $\lambda_2^{(t)}$ from $\pi(\lambda_2 \mid a^{(t-1)}, data)$ with an appropriate proposal distribution.

Step 5: Using Metropolis-Hastings, generate $a^{(t)}$ from $\pi(a \mid \lambda_1^{(t)}, \lambda_2^{(t)}, data)$ with an appropriate proposal distribution.

Step 6: Compute $R^{(t)}$ from the expression (1.1), evaluated at $(\lambda_1^{(t)}, \lambda_2^{(t)})$.

Step 7: Set $t = t + 1$.

Step 8: Repeat Steps 3-7 $N$ times.

Note that in Steps 3-5, we use the Metropolis-Hastings algorithm with a proposal distribution as follows:

Let $\theta$ be the current value of the parameter being updated and $\pi(\cdot)$ its target (conditional posterior) density.

Generate $\theta'$ from the proposal distribution $q(\theta' \mid \theta)$.

Let

$$p = \min\left\{1,\; \frac{\pi(\theta')\, q(\theta \mid \theta')}{\pi(\theta)\, q(\theta' \mid \theta)}\right\}.$$

Accept $\theta'$ with probability $p$, or retain $\theta$ with probability $1 - p$.
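To make the algorithm concrete, here is a compact R sketch of the sampler. The log-scale random-walk proposal and all function names are our own assumptions, since the paper does not pin down the proposal distributions; the target is the joint posterior written above, and $R$ is computed at each iteration via (1.1):

```r
# Gibbs-with-Metropolis-Hastings sketch for the posterior of (lambda1,
# lambda2, a). Proposal: multiplicative log-normal random walk (our choice).
gibbs.R <- function(x, y, k, a1, b1, a2, b2, a3, b3,
                    N = 10000, burn = 5000, sd.prop = 0.2) {
  m <- length(x); n <- length(y)
  # log of the joint posterior, up to an additive constant
  log.post <- function(l1, l2, a) {
    if (l1 <= 0 || l2 <= 0 || a <= 0) return(-Inf)
    (a1 + m - 1) * log(l1) - l1 * (b1 + sum(x^a)) +
    (a2 + n - 1) * log(l2) - l2 * (b2 + sum(y^a)) +
    (a3 + m + n - 1) * log(a) - b3 * a +
    (a - 1) * (sum(log(x)) + sum(log(y)))
  }
  th <- c(1, 1, 1)                       # Step 1: initial guess
  out <- numeric(N)
  for (t in 1:N) {
    for (i in 1:3) {                     # Steps 3-5: MH update of each parameter
      prop <- th
      prop[i] <- th[i] * exp(rnorm(1, 0, sd.prop))   # log-scale random walk
      # log acceptance ratio; log(prop[i]/th[i]) is the proposal's Jacobian term
      logr <- log.post(prop[1], prop[2], prop[3]) -
              log.post(th[1], th[2], th[3]) + log(prop[i] / th[i])
      if (log(runif(1)) < logr) th <- prop
    }
    out[t] <- R.extreme(th[1], th[2], k) # Step 6: R^(t) via (1.1)
  }
  mean(out[(burn + 1):N])                # posterior mean under squared error loss
}
```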

In the case of exponential distributions (i.e. $a = 1$), the posterior pdfs of $\lambda_1$ and $\lambda_2$ are as follows:

$$\lambda_1 \mid data \sim Gamma\left(a_1 + m,\; b_1 + \sum_{i=1}^{m}x_i\right) \qquad\text{and}\qquad \lambda_2 \mid data \sim Gamma\left(a_2 + n,\; b_2 + \sum_{j=1}^{n}y_j\right).$$

Now the posterior mean and posterior variance of $R$ become

$$E(R \mid data) = \int_0^\infty\int_0^\infty R(\lambda_1, \lambda_2)\,\pi(\lambda_1 \mid data)\,\pi(\lambda_2 \mid data)\, d\lambda_1\, d\lambda_2$$

and

$$V(R \mid data) = E(R^2 \mid data) - \left[E(R \mid data)\right]^2,$$

respectively.
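In this exponential case no Metropolis step is needed: one can draw directly from the two gamma posteriors and approximate the posterior mean and variance of $R$ by Monte Carlo. A minimal sketch (bayes.R.exp is our name), again reusing R.extreme():

```r
# Monte Carlo approximation of the posterior mean and variance of R in the
# exponential case (a = 1), sampling directly from the gamma posteriors.
bayes.R.exp <- function(x, y, k, a1, b1, a2, b2, N = 10000) {
  l1 <- rgamma(N, shape = a1 + length(x), rate = b1 + sum(x))
  l2 <- rgamma(N, shape = a2 + length(y), rate = b2 + sum(y))
  R <- mapply(R.extreme, l1, l2, MoreArgs = list(k = k))
  c(mean = mean(R), var = var(R))
}
```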

4. Simulation and Discussion

In this section, we present some results based on Monte Carlo simulations to compare the performance of the estimates of $R$ for different parameter values. The values of $m$, $n$, $a$ and $k$ are mentioned under each table. All computations were performed using the R software and are available on request from the corresponding author. We consider drawing inference on $R$ when the baseline distribution of the extreme distribution is known. All the results are based on 1000 replications.

We report the average biases and mean squared errors (MSEs) over 1000 replications. We also compute the 95% lower confidence limit (lcl) of $R$ based on the asymptotic distribution of $\hat{R}$, and using the Boot-p and Boot-t methods. The bootstrap lcls are obtained using 1000 bootstrap replications in both cases. All the results are reported in Table 7, Table 8 and Table 9.

Some points are quite clear from this experiment. The performances of the MLEs are quite satisfactory in terms of biases and MSEs. It is observed that as $k$ increases, the MSEs decrease for low values of $R$ but increase for high values of $R$. High values of $R$ are slightly underestimated (i.e. the bias is negative), whereas low values of $R$ are generally overestimated. All lower confidence bounds are estimated satisfactorily. In particular, the Boot-t lcls perform very well. Based on all this, we recommend using the parametric bootstrap lcls, particularly the Boot-t lcls.

We do not have any prior information on $R$; therefore, we prefer to use non-informative priors to compute the Bayes estimates. Since the non-informative prior hyperparameter choice $a_i = b_i = 0$, $i = 1, 2, 3$, yields improper prior distributions, we adopt the suggestion of Congdon (2001, p. 20) and choose small positive values of the hyperparameters, which makes the priors almost like Jeffreys' prior but proper. Under these prior distributions, we compute the Bayes estimates of $\lambda_1$ and $\lambda_2$ and obtain approximate Bayes estimates of $R$ under the squared error loss function. To generate random observations from the posterior distributions of $\lambda_1$ and $\lambda_2$, we use the Metropolis-Hastings method with appropriate proposal distributions. The algorithm of Gibbs sampling is described in Section 3. The burn-in sample in each case is taken to be 5000. The results are reported in Table 10, Table 11 and Table 12; the changes in the average biases and the MSEs do not show a clear picture. Therefore, if we do not have prior information about $\lambda_1$ and $\lambda_2$, then we may not gain much by using Bayes estimates. Since the MLE is consistent and can also be used for constructing lower confidence limits, we recommend using MLEs in this case.

5. Simulated Data Analysis

In this section, we present the analysis of simulated data. The data sets are presented in Table 1, Table 3 and Table 5. The results are summarized in Table 2, Table 4 and Table 6.

Table 1. Simulated Data Set m=15, n=25, λ1=2, λ2=0.5, a=1, k=1

Table 2. Estimates of R and Lower Confidence Limits

Table 3. Simulated Data Set m=15, n=25, λ1=2, λ2=0.5, a=1, k=2

Table 4. Estimates of R and Lower Confidence Limits

Table 5. Simulated Data Set m=15, n=25, λ1=2, λ2=0.5, a=1, k=3

Table 6. Estimates of R and Lower Confidence Limits

The true values of R for the simulated data sets in Table 1, Table 3 and Table 5 are 0.8, 0.5333333 and 0.3324675 respectively (see Table 7, Table 8 and Table 9 for $\lambda_1 = 2$, $\lambda_2 = 0.5$). We observe that, in all the cases, the MLE of R is very close to the true value. One should note that in a real-life data situation, the true value of R cannot be known, and hence comparison of biases and MSEs is not possible. However, in the present scenario, one can get almost the true picture from the simulation results presented in Section 4. From Table 7, Table 8 and Table 9, it is ensured that the MLE of R has minimum bias and MSE among the values corresponding to $\lambda_1 = 2$, $\lambda_2 = 0.5$. It is evident from the analysis of the data sets and the results presented in Table 2, Table 4 and Table 6 that the MLE of R is fairly good compared to the Bayes estimate; this fact was also reported in the simulation study. Some improvement in the Bayes estimate of R may be expected if an appropriate prior distribution, when available, is selected instead of the non-informative prior. In all the data sets, the Bootstrap-t lcls are better from the coverage probability point of view.

6. Concluding Remark

In this article, we have discussed the inference problem for $R = P(X < Y)$ with $X = \max(X_1, \ldots, X_k)$ and $Y = \min(Y_1, \ldots, Y_k)$, where the $X_i$'s and $Y_j$'s are assumed to follow Weibull distributions. We have considered the maximum likelihood estimate and the Bayes estimate of $R$. Comparing the two, we recommend using the MLE of $R$. Emphasis has been given to lower confidence limits, as these are of practical importance: practitioners want to assert that the system has at least attained this limit. To construct lcls, we consider the delta method and two bootstrap methods, percentile (Boot-p) and bootstrap-t (Boot-t). We recommend using the parametric bootstrap lcls, particularly the Boot-t lcls.

Table 7. Simulation results for the extreme distribution. m=25, n=25, a=1, k=1

Table 8. Simulation results for the extreme distribution. m=25, n=25, a=1, k=2

Table 9. Simulation results for the extreme distribution. m=25, n=25, a=1, k=3

Table 10. Simulation results on Bayes estimate of R. m=25, n=25, a=1, k=1

Table 11. Simulation results on Bayes estimate of R. m=25, n=25, a=1, k=2

Table 12. Simulation results on Bayes estimate of R. m=25, n=25, a=1, k=3

Acknowledgement

The authors are thankful to the referee for valuable comments, which led to an improvement in the presentation of this paper.

References

[1] Congdon, P., Bayesian Statistical Modelling, John Wiley, 2001.

[2] Efron, B., The Jackknife, the Bootstrap and Other Resampling Plans, CBMS-NSF Regional Conference Series in Applied Mathematics, 38, SIAM, Philadelphia, PA, 1982.

[3] Efron, B., Discussion: Theoretical comparison of bootstrap confidence intervals, The Annals of Statistics, 16, 969-972, 1988.

[4] Hall, P., Theoretical comparison of bootstrap confidence intervals, The Annals of Statistics, 16, 927-953, 1988.

[5] McCool, J.I., Inference on P(X<Y) in the Weibull case, Commun. Statist.-Simula. and Comp., 20, 129-148, 1991.

[6] Mukherjee, S.P. and Maiti, S.S., Stress-strength reliability in the Weibull case, Frontiers in Reliability, World Scientific, 4, 231-248, 1998.
 