# R code for constructing likelihood based confidence intervals when summing two random variables

The following R code may be used for computing likelihood based confidence intervals when summing two random variables.

## A practical example
A device is known to fail due to material cracks. The failure time for this device is obtained by summing the onset time (X) and growth time (Y) of the cracks. Both X and Y are random variables and are assumed to follow a Weibull distribution.

What is the probability of failure when X+Y is less than, say, 314 seconds? This probability (denoted by Fhat) is computed using the following (convolution) integral: $F_{hat} =\int^{+\infty}_{-\infty} F_x(314-y)f_y(y) \ dy$

where Fx(·) is the Weibull cumulative distribution function (cdf) of X (=onset time), and fy(·) is the Weibull probability density function (pdf) of Y (=growth time).

Note that both the onset times (X) and growth times (Y) never take negative values. Since both random variables X and Y are non-negative, the limits of the above integral change. That is, the lower limit changes from -∞ to 0, since fy(y) will be 0 for negative values (i.e., for y<0). Similarly, the upper limit changes from +∞ to 314, since Fx(314-y) will be 0 for negative arguments (i.e., for y>314). Accordingly, the above integral is given by: $F_{hat} =\int^{314}_{0} F_x(314-y)f_y(y) \ dy$
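The finite-limit integral can be sanity-checked with a short numerical sketch before turning to the full R analysis below. The sketch is in Python, assumes the location-scale (smallest extreme value) parameterization of the Weibull distribution introduced in the next section, and, purely for illustration, plugs in the parameter estimates listed under "Previous results" (μx=6.8, σx=0.5, μy=6.0, σy=0.2); the function names are hypothetical.

```python
import math

# Location-scale view of the Weibull distribution: if T is Weibull,
# then log(T) follows a smallest extreme value (SEV) distribution.
def psev(z):  # SEV cdf
    return 1.0 - math.exp(-math.exp(z))

def dsev(z):  # SEV pdf
    return math.exp(z - math.exp(z))

def integrand(y, t, mu_x, sigma_x, mu_y, sigma_y):
    # f_y(y): Weibull pdf of the growth time Y
    f_y = dsev((math.log(y) - mu_y) / sigma_y) / (sigma_y * y)
    # F_x(t - y): Weibull cdf of the onset time X
    F_x = psev((math.log(t - y) - mu_x) / sigma_x)
    return F_x * f_y

def fhat(t, mu_x, sigma_x, mu_y, sigma_y, n=2000):
    # Composite Simpson's rule on (0, t); the endpoints are skipped
    # to avoid log(0) -- the integrand vanishes there anyway.
    h = t / n
    total = 0.0
    for i in range(1, n):
        w = 4.0 if i % 2 == 1 else 2.0
        total += w * integrand(i * h, t, mu_x, sigma_x, mu_y, sigma_y)
    return total * h / 3.0

# P(X + Y < 314) at the assumed parameter values
print(fhat(314, 6.8, 0.5, 6.0, 0.2))
```

In the R code below the same integral is evaluated with `integrate`; the sketch merely illustrates that the convolution reduces to a one-dimensional quadrature over (0, t).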

## Computing likelihood based intervals for Fhat
For computing the likelihood, a location-scale parameterization for the Weibull distribution will be employed. The location of the Weibull distribution for random variable X will be denoted by μx, and its scale by σx. In addition, μy and σy are, respectively, the location and scale of random variable Y.
A consequence of using a location-scale based distribution is that Fx(·) in the above convolution integral becomes a function of μx and σx. Similarly, fy(·) will be some function of μy and σy.
Furthermore, the likelihood function for X (denoted by Lx) can now be defined in terms of μx and σx, and the likelihood function for Y (denoted by Ly) in terms of μy and σy.
Finally, under the assumption that random variables X and Y are independent, the likelihood function for the sum of X and Y is given by Lsum=Lx(μx, σx)*Ly(μy, σy).
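Under independence the log-likelihoods simply add. As a minimal illustration, here is a Python sketch for the case of exact failures and right-censored observations (the R code below additionally handles interval censoring); the helper names are hypothetical.

```python
import math

def dsev(z):  # pdf of the smallest extreme value (SEV) distribution
    return math.exp(z - math.exp(z))

def psev(z):  # cdf of the SEV distribution
    return 1.0 - math.exp(-math.exp(z))

def weibull_loglik(times, status, mu, sigma):
    # Weibull log-likelihood in location-scale form: an exact
    # failure contributes log f(t), a right-censored observation
    # contributes log S(t) = log(1 - F(t)).
    ll = 0.0
    for t, d in zip(times, status):
        z = (math.log(t) - mu) / sigma
        if d == 1:
            ll += math.log(dsev(z) / (sigma * t))
        else:
            ll += math.log(1.0 - psev(z))
    return ll

def loglik_sum(x, xd, mu_x, sigma_x, y, yd, mu_y, sigma_y):
    # independence of X and Y: log Lsum = log Lx + log Ly
    return (weibull_loglik(x, xd, mu_x, sigma_x)
            + weibull_loglik(y, yd, mu_y, sigma_y))
```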

The likelihood function Lsum will be used in computing the likelihood based intervals for Fhat. However, for computing these Fhat confidence intervals we need to reparameterize Lsum. For this reparameterization to work, remember that the convolution integral defined Fhat as some function of the parameters μx, σx, μy, and σy, denoted by Fhat = f(μx, σx, μy, σy). As a consequence, it is also possible to express μx as some function of Fhat, σx, μy, and σy, denoted by μx = g(Fhat, σx, μy, σy).

The reparameterization of Lsum requires that μx is substituted by g(Fhat, σx, μy, σy) in the original likelihood function for Lsum. In that way, Lsum becomes a function of Fhat, σx, μy, and σy. This latter parameterization of Lsum makes it possible to construct likelihood based confidence intervals for Fhat.

For our specific reparameterization of Lsum, it is thus needed to express μx as a function of the parameters Fhat, σx, μy, and σy. In other words, we need to compute μx given any values for Fhat, σx, μy, and σy.
The convolution integral showed that Fhat = f(μx, σx, μy, σy), and therefore that Fhat - f(μx, σx, μy, σy) = 0. Solving this latter equation yields our sought value for μx, given any values for the remaining parameters in the equation.
One way of obtaining this value for μx is by relying on numerical optimization. More specifically, μx can be found for any given values of Fhat, σx, μy, and σy by minimizing abs(Fhat - f(μx, σx, μy, σy)) over all possible values of μx.
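Since Fhat decreases monotonically in μx (shifting the onset-time distribution to the right makes early failure less likely), the same one-dimensional search can also be sketched with a simple bisection on Fhat - f(μx, ·) = 0, rather than minimizing the absolute difference. A minimal Python sketch under the location-scale Weibull parameterization, with the parameter values assumed from the "Previous results" section and hypothetical function names:

```python
import math

def psev(z):  # SEV cdf (location-scale form of the Weibull)
    return 1.0 - math.exp(-math.exp(z))

def dsev(z):  # SEV pdf
    return math.exp(z - math.exp(z))

def fhat(t, mu_x, sigma_x, mu_y, sigma_y, n=2000):
    # Simpson's rule for the convolution integral on (0, t)
    h = t / n
    total = 0.0
    for i in range(1, n):
        y = i * h
        f_y = dsev((math.log(y) - mu_y) / sigma_y) / (sigma_y * y)
        F_x = psev((math.log(t - y) - mu_x) / sigma_x)
        total += (4.0 if i % 2 == 1 else 2.0) * f_y * F_x
    return total * h / 3.0

def solve_mu_x(target_fhat, sigma_x, mu_y, sigma_y, t, lo=0.0, hi=20.0):
    # Fhat is decreasing in mu_x, so bisect on the sign of the
    # difference Fhat - f(mu_x, ...).
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fhat(t, mid, sigma_x, mu_y, sigma_y) > target_fhat:
            lo = mid  # probability still too high: increase mu_x
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A round trip recovers the location: solving for the Fhat that corresponds to μx=6.8 returns μx≈6.8. The R code below instead minimizes abs(Fhat - f(·)) with Brent's method over an interval around the MLE, which amounts to the same one-dimensional search.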

## Previous results
Previously, a company analyzing the failure times for the above device used Matlab for computing the likelihood based intervals for Fhat. For the device’s data, Matlab’s estimates for the location and scale of X (=onset time) and Y (=growth time) were:
μx=6.8, σx=0.5
μy=6.0, σy=0.2

Using Matlab, the likelihood based interval estimates for Fhat (for X+Y<314) were:
Two-sided 90% confidence interval: (.000005, .0196)
One-sided 90% confidence interval: .0112 (=upper bound of one-sided confidence interval)

These Matlab results will be used as benchmarks for the R code results below. The analysis in this blog is the result of a collaboration between Stefan Gelissen (Datall, the Netherlands) and Yontha Ath (Aerospace Corporation, USA).



## R code

library(SPREDA)
library(pracma)

##data

#X-data (crack onset times in seconds)
#lower limit of time interval
XL <- c(112,276,1,323,80,368,618,627,673,824,833,72.5)
#upper limit of time interval
XU <- c(328,563,704,528,NA,NA,NA,NA,NA,NA,NA,NA)
#failure=1, censored=0
XStatus <- c(rep(1,4), rep(0,8))
#weights
XCount <- c(rep(1,11),75)
#construct data frame
Xdata <- data.frame(XL,XU,XStatus,XCount)

#Y-data (crack growth times in seconds)
Ydata <- c(279,431) #crack growth times
Yd <- c(1,1) #failure=1, censored=0

##fit distributions to data
modX <- survreg(Surv(XL, XU, type="interval2") ~ 1,
weights=XCount, data=Xdata,
dist="weibull")
modY <- survreg(Surv(Ydata, Yd)~1, dist="weibull")

#obtain MLE for mu and sigma
#note: these MLEs are similar to those obtained with Matlab
Xmu <- coef(modX)
names(Xmu) <- "Xmu"
Xsigma <- modX$scale
names(Xsigma) <- "Xsigma"
c(Xmu, Xsigma)

Ymu <- coef(modY)
names(Ymu) <- "Ymu"
Ysigma <- modY$scale
names(Ysigma) <- "Ysigma"
c(Ymu, Ysigma)

##likelihood based interval for F(t) when summing 2 random variables X and Y

#specify time
t <- 314

#specify integrand for numerical integration
integrand <- function(u, Xmu, Ymu, Xsigma, Ysigma) {
#pdf for Y (where Y follows a Weibull distribution)
f1 <- dsev((log(u)-Ymu)/Ysigma) / (Ysigma*u)
#cdf for X (where X follows a Weibull distribution)
f2 <- psev((log(t-u)-Xmu)/Xsigma)
f1*f2
}

#function for computing abs(Fhat-f(Xmu,Xsigma,Ymu,Ysigma))
XmuMin1 <- function (XmuPar, Xsigma, Ymu, Ysigma, Fhat, t){
abs(Fhat-integrate(integrand, 0, t, Xmu=XmuPar, Ymu=Ymu,
Xsigma=Xsigma, Ysigma=Ysigma)$value)}

#log-likelihood function
XmuSE <- sqrt(diag(vcov(modX))) #extract standard error for Xmu

llik.sum <- function (modpar, xlower, xupper, xd, wtx=rep(1,length(xlower)),
y, yd, wty=rep(1,length(y)), Fhat, t) {
xsigma <- exp(modpar[1])
ysigma <- exp(modpar[2])
ymu <- modpar[3]
xmu <- optim(par=Xmu, XmuMin1, method="Brent",
lower=Xmu-20*XmuSE, upper=Xmu+20*XmuSE,
Fhat=Fhat, Xsigma=xsigma, Ymu=ymu, Ysigma=ysigma, t=t)$par

zy <- (log(y)-ymu)/ysigma
zxLower <- (log(xlower)-xmu)/xsigma
zxUpper <- (log(xupper)-xmu)/xsigma

#assuming that the random variables X and Y are independent,
#the log-likelihood function is given by:
sum(wty*log((( 1/(ysigma*y) * dsev(zy)) ^yd)*( (1-psev(zy)) ^(1-yd) ))) +
sum(wtx*log( ((psev(zxUpper)-psev(zxLower))^xd)*( (1-psev(zxLower)) ^(1-xd) ) ))
}

#profile function
profile.sum <- function(xlower, xupper, xd, wtx=rep(1,length(xlower)),
y, yd, wty=rep(1,length(y)), t, Fhats) {
plkfhat <- rep(0, length(Fhats))

for (i in 1:length(Fhats)) {
temp <- optim(c(log(Xsigma), log(Ysigma), Ymu), llik.sum,
method='Nelder-Mead', control=list(fnscale=-1),
xlower=xlower, xupper=xupper, xd=xd, wtx=wtx,
y=y, yd=yd, wty=wty, t=t, Fhat=Fhats[i] )
plkfhat[i] <- temp$value
}
plkfhat
}

#MLE for Fhat
(Fhat <- integrate(integrand, 0, t, Xmu=Xmu, Ymu=Ymu,
Xsigma=Xsigma, Ysigma=Ysigma)$value)

#generate fixed values for Fhat
Fhats <- seq(0.000001, 0.025, length.out=100)

#likelihood profile
#note: in case of warning messages, decrease the width of the interval
#for possible values of Xmu in the llik.sum function
#that is, change the bounds to, for instance, Xmu+/-10*XmuSE
ll <- profile.sum(Fhats=Fhats, xlower=XL, xupper=XU , xd=XStatus, wtx=XCount,
y=Ydata, yd=Yd, t=t)

#plot profile likelihood for Fhat
plot(Fhats,ll, type='l', xlab=paste("F(",t,")", sep=""), ylab="log-likelihood")

#include lower limit for log-likelihood values
loglikw <- llik.sum(modpar=c(log(Xsigma), log(Ysigma), Ymu),
xlower=XL, xupper=XU, xd=XStatus, wtx=XCount,
y=Ydata, yd=Yd, Fhat=Fhat, t=t)
alpha <- .10 #90% confidence interval
limit <- loglikw - .5*qchisq(1-alpha, df=1)
abline(h=limit,col="blue",lty=2)
#include MLE for Fhat
abline(v=Fhat, col="red", lty=2)

#compute limits of confidence interval by hand
limitlkh <- function(xlower, xupper, xd, y, yd, t, Fhats,
wtx=rep(1,length(xlower)), wty=rep(1,length(y))) {
profile.sum(xlower, xupper, xd, wtx, y, yd, wty, t, Fhats) - limit
}

#upper and lower limits of two-sided confidence interval
lowerci <- fzero(limitlkh, c(1e-6,Fhat), xlower=XL, xupper=XU, xd=XStatus, wtx=XCount, y=Ydata, yd=Yd, t=t)
upperci <- fzero(limitlkh, c(Fhat, .025), xlower=XL, xupper=XU, xd=XStatus, wtx=XCount, y=Ydata, yd=Yd, t=t)
round(c(lowerci$x, upperci$x), 6)
#note: these limits are similar to those obtained with Matlab, which were (.000005,.0196)

#include confidence bounds in plot
abline(v=lowerci$x, col="darkgreen", lty=2)
abline(v=upperci$x, col="darkgreen", lty=2)

#upper bound of one-sided interval
alpha <- .10 #90% confidence interval
limit <- loglikw - .5*qchisq(1-2*alpha, df=1)

upperci <- fzero(limitlkh, c(Fhat, .025), xlower=XL, xupper=XU, xd=XStatus, wtx=XCount, y=Ydata, yd=Yd, t=t)
upperci$x
#note: this limit is similar to that obtained with Matlab, which was .0112

In the following example the X and Y random variables are assumed to follow lognormal distributions. Furthermore, in this example the failure times for X are exact (instead of the interval censored failure times for X in the example above).

library(survival)
library(pracma)

##some artificial data
Xdata <- c(94, 191, 339, 371, 391, 421, 428, 462, 478, 502,
503, 510, 512, 529, 531, 546, 585, 611, 618, 636,
704, 705, 712, 718, 776, 781, 782, 788, 792, 805,
808, 836, 883, 886, 899, 904, 915, 1002, 1009, 1017,
1061, 1132, 1148, 1222, 1300, 1316, 1446, 1493, 1504, 1533)
Xd <- rep(1, length(Xdata)) #failure=1, censored=0

Ydata <- c(178, 222, 240, 247, 251, 265, 271, 279, 287, 294,
308, 313, 320, 324, 338, 347, 348, 351, 352, 354,
356, 357, 357, 358, 394, 407, 410, 415, 418, 462)
Yd <- rep(1, length(Ydata)) #failure=1, censored=0

##fit distributions to data
modX <- survreg(Surv(Xdata, Xd)~1, dist="lognormal")
modY <- survreg(Surv(Ydata, Yd)~1, dist="lognormal")

#obtain MLE for mu and sigma
Xmu <- coef(modX)
names(Xmu) <- "Xmu"
Xsigma <- modX$scale
names(Xsigma) <- "Xsigma"

Ymu <- coef(modY)
names(Ymu) <- "Ymu"
Ysigma <- modY$scale
names(Ysigma) <- "Ysigma"

##likelihood based interval for F(t) when summing 2 random variables X and Y

#specify time
t <- 314

#specify integrand for numerical integration
integrand <- function(u, Xmu, Ymu, Xsigma, Ysigma) {
#pdf for Y (where Y follows a lognormal distribution)
f1 <- dnorm((log(u)-Ymu)/Ysigma) / (Ysigma*u)
#cdf for X (where X follows a lognormal distribution)
f2 <- pnorm((log(t-u)-Xmu)/Xsigma)
f1*f2
}

#function for computing abs(Fhat-f(Xmu,Xsigma,Ymu,Ysigma))
XmuMin1 <- function (XmuPar, Xsigma, Ymu, Ysigma, Fhat, t){
abs(Fhat-integrate(integrand, 0, t, Xmu=XmuPar, Ymu=Ymu,
Xsigma=Xsigma, Ysigma=Ysigma)$value)}

#log-likelihood function
XmuSE <- sqrt(diag(vcov(modX))) #extract standard error for Xmu

llik.sum <- function (modpar, x, xd, wtx=rep(1,length(x)),
y, yd, wty=rep(1,length(y)), Fhat, t) {
xsigma <- exp(modpar[1])
ysigma <- exp(modpar[2])
ymu <- modpar[3]
xmu <- optim(par=Xmu, XmuMin1, method="Brent",
lower=Xmu-20*XmuSE, upper=Xmu+20*XmuSE,
Fhat=Fhat, Xsigma=xsigma, Ymu=ymu, Ysigma=ysigma, t=t)$par

z1 <- (log(y)-ymu)/ysigma
z2 <- (log(x)-xmu)/xsigma

#assuming that the random variables X and Y are independent,
#the log-likelihood function is given by:
sum(wty*log((( 1/(ysigma*y) * dnorm(z1)) ^yd)*( (1-pnorm(z1)) ^(1-yd) ))) +
sum(wtx*log((( 1/(xsigma*x) * dnorm(z2)) ^xd)*( (1-pnorm(z2)) ^(1-xd) )))
}

#profile function
profile.sum <- function(x, xd, wtx=rep(1,length(x)),
y, yd, wty=rep(1,length(y)), t, Fhats) {
plkfhat <- rep(0, length(Fhats))

for (i in 1:length(Fhats)) {
temp <- optim(c(log(Xsigma), log(Ysigma), Ymu), llik.sum,
method='Nelder-Mead', control=list(fnscale=-1),
x=x, xd=xd, wtx=wtx,
y=y, yd=yd, wty=wty, t=t, Fhat=Fhats[i] )
plkfhat[i] <- temp$value
}
plkfhat
}

#MLE for Fhat
(Fhat <- integrate(integrand, 0, t, Xmu=Xmu, Ymu=Ymu,
Xsigma=Xsigma, Ysigma=Ysigma)$value)

#generate fixed values for Fhat
Fhats <- seq(0.0000001, 0.0002, length.out=100)

#likelihood profile
#note: in case of warning messages, decrease the width of the interval
#for possible values of Xmu in the llik.sum function
#that is, change the bounds to, for instance, Xmu+/-10*XmuSE
ll <- profile.sum(Fhats=Fhats, x=Xdata, xd=Xd, y=Ydata, yd=Yd, t=t)

#plot profile likelihood for Fhat
plot(Fhats,ll, type='l', xlab=paste("F(",t,")", sep=""), ylab="log-likelihood")

#include lower limit for log-likelihood values
loglikw <- llik.sum(modpar=c(log(Xsigma), log(Ysigma), Ymu),
x=Xdata, xd=Xd, y=Ydata, yd=Yd, Fhat=Fhat, t=t)
alpha <- .10 #90% confidence interval
limit <- loglikw - .5*qchisq(1-alpha, df=1)
abline(h=limit,col="blue",lty=2)
#include MLE for Fhat
abline(v=Fhat, col="red", lty=2)

#compute limits of confidence interval by hand
limitlkh <- function(xobs, xd, yobs, yd, t, Fhats,
wtx=rep(1,length(xobs)), wty=rep(1,length(yobs))) {
profile.sum(x=xobs, xd, wtx, y=yobs, yd, wty, t, Fhats) - limit
}

#upper and lower limits of two-sided confidence interval
lowerci <- fzero(limitlkh, c(1e-7, Fhat), xobs=Xdata, xd=Xd, yobs=Ydata, yd=Yd, t=t)
upperci <- fzero(limitlkh, c(Fhat, 2e-4), xobs=Xdata, xd=Xd, yobs=Ydata, yd=Yd, t=t)
c(lowerci$x, upperci$x)

#include confidence bounds in plot
abline(v=lowerci$x, col="darkgreen", lty=2)
abline(v=upperci$x, col="darkgreen", lty=2)