# R code for constructing likelihood-based confidence intervals for the hazard function

The following R code may be used for computing likelihood-based confidence intervals for the hazard function of an accelerated failure time (AFT) model. The code handles failure times that follow either a Weibull or a lognormal distribution.

## Definition of the hazard function
The hazard function (or hazard rate) at time t is given by:

$h(t)=\frac{f(t)}{S(t)}$

where f(·) is the probability density function of the failure times, and S(·) is the survival function.
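As a quick numeric illustration of this definition, using base R's Weibull functions (the shape and scale values below are arbitrary example values):

```r
#illustration of h(t) = f(t)/S(t) for a Weibull distribution
#(shape and scale are arbitrary example values)
shape <- 2; scale <- 1000
t0 <- 500
f_t <- dweibull(t0, shape, scale)                   #density f(t)
S_t <- pweibull(t0, shape, scale, lower.tail=FALSE) #survival S(t)
h_t <- f_t / S_t                                    #hazard h(t) = f(t)/S(t)

#for the Weibull, h(t) also has the closed form (shape/scale)*(t/scale)^(shape-1)
h_closed <- (shape/scale) * (t0/scale)^(shape-1)
all.equal(h_t, h_closed)
```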

## Computing likelihood-based intervals for the hazard function
For computing the likelihood, a location-scale parametric distribution will be employed, with location denoted by μx and scale by σx.
A consequence of using a location-scale distribution is that both f(·) and S(·) in the above equation for the hazard function (denoted by h) are functions of μx and σx. The likelihood function (denoted by L) is likewise a function of μx and σx.
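For right-censored data, each exact failure time contributes its density and each censored time its survival probability, so the likelihood has the standard form

$L(\mu_x,\sigma_x)=\prod_{i=1}^{n} f(t_i)^{\delta_i}\,S(t_i)^{1-\delta_i}$

where δi = 1 for an observed failure and δi = 0 for a right-censored observation. This is the product that the log-likelihood functions in the R code below evaluate on the log scale.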

The likelihood function L will be used for computing the likelihood-based intervals for the hazard function. Before doing so, however, we need to reparameterize L. To see how this reparameterization works, note that h is a function of the parameters μx and σx, which we write as h = h(μx, σx). Consequently, it is also possible to express μx as a function of h and σx, written as μx = g(h, σx).

The reparameterization of L amounts to substituting g(h, σx) for μx in the original likelihood function. In that way, L becomes a function of h and σx, and it is this parameterization of L that makes it possible to construct likelihood-based confidence intervals for the hazard function.

For this reparameterization of L, we thus need to express μx as a function of h and σx; in other words, we need to compute μx for any given values of h and σx.
Since the hazard function links h to μx and σx through h = h(μx, σx), the equation h - h(μx, σx) = 0 implicitly defines the sought-after value of μx for any given values of the remaining quantities (i.e., h and σx).
One way of obtaining this value of μx is numerical optimization: for given values of h and σx, μx can be found by minimizing abs(h - h(μx, σx)) over all possible values of μx.
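A minimal sketch of this idea for the Weibull case (all values below are arbitrary illustration values, not the fitted ones used later): pick a true μx, compute the hazard value it implies, and then recover μx by minimizing the absolute difference.

```r
#hazard of a Weibull in location-scale form (location mu, scale sigma)
haz.weib <- function(mu, sigma, t) (1/(sigma*exp(mu))) * (t/exp(mu))^((1/sigma)-1)

mu.true <- 9; sigma <- 0.5; t0 <- 5000
h <- haz.weib(mu.true, sigma, t0)  #hazard value implied by mu.true

#recover mu from h and sigma by minimizing abs(h - h(mu, sigma)) over mu
obj <- function(mu) abs(h - haz.weib(mu, sigma, t0))
mu.hat <- optimize(obj, interval=c(0, 20))$minimum
```

Because the Weibull hazard is monotone in μx for fixed σx and t, the objective has a unique minimum at the sought-after value.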

A description of likelihood-based confidence intervals can be found in Meeker and Escobar's 1998 book *Statistical Methods for Reliability Data* (Chapter 8, pp. 173-186).

See this blog post for computing bootstrap confidence intervals for the hazard function.

See this page for an overview of all of Stefan’s R code blog posts.

## R code

library(survival) #survreg() and Surv()
library(SPREDA)   #dsev() and psev()
library(pracma)   #fzero()

##data

#Shock absorber failure data (see: Meeker & Escobar, Example 3.8, p. 59)

#failure=1, censored=0
#(assumes the data are available in a data.frame named ShAbs,
#with columns Distance and Censor)
ShAbs$Cens <- as.numeric(ShAbs$Censor)-1

#fit Weibull model
modX <- survreg(Surv(Distance,Cens)~1, data=ShAbs, dist="weibull")
summary(modX)

#obtain MLE for mu and sigma
Xmu <- coef(modX)[1]
names(Xmu) <- "Xmu"
Xsigma <- modX$scale
names(Xsigma) <- "Xsigma"

#log-likelihood
(loglikw <- modX$loglik[1])

#construct variables for analysis
Xdata <- ShAbs$Distance
Xd <- ShAbs$Cens

##likelihood based interval for the hazard function

#specify time at which to compute the hazard rate and its confidence interval
t <- 20000

#function for computing abs(hazhat - f(XmuPar, Xsigma)/S(XmuPar, Xsigma)),
#i.e., the distance between a candidate hazard value and the Weibull
#hazard rate at time t
XmuMin1 <- function (XmuPar, Xsigma, hazhat, t){
  abs(hazhat - ((1/(Xsigma*exp(XmuPar))) * (t/exp(XmuPar))^((1/Xsigma)-1)))
}

#log-likelihood function
XmuSE <- sqrt(diag(vcov(modX)))[1] #extract standard error for Xmu

llik.haz <- function (modpar, x, xd, hazhat, t) {
  xsigma <- exp(modpar[1])

  xmu <- optim(par=Xmu, XmuMin1, method="Brent",
               lower=Xmu-20*XmuSE, upper=Xmu+20*XmuSE,
               hazhat=hazhat, Xsigma=xsigma, t=t)$par

  z <- (log(x)-xmu)/xsigma

  #log-likelihood for exact (not censored) and right-censored observations
  sum(log((( 1/(xsigma*x) * dsev(z)) ^xd) * ( (1-psev(z)) ^(1-xd) )))
}

#profile function
XsigmaSE <- sqrt(diag(vcov(modX)))[2] #extract standard error for Xsigma

profile.haz <- function(x, xd, t, hazhats) {
  plkhazhat <- rep(0, length(hazhats))
  for (i in 1:length(hazhats)) {
    temp <- optim(c(log(Xsigma)), llik.haz, method='Brent',
                  control=list(fnscale=-1),
                  lower=log(Xsigma)-5*XsigmaSE, upper=log(Xsigma)+5*XsigmaSE,
                  x=x, xd=xd, t=t, hazhat=hazhats[i])
    plkhazhat[i] <- temp$value
  }
  plkhazhat
}

#MLE for the hazard rate at specified t
(hazhat <- as.vector((1/(Xsigma*exp(Xmu)))*(t/exp(Xmu))^((1/Xsigma)-1)))
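As a side note, the closed-form expression above corresponds to a Weibull distribution with shape 1/Xsigma and scale exp(Xmu), which is how survreg parameterizes the Weibull. A quick check with arbitrary illustrative values (not the fitted ones):

```r
#the closed-form hazard equals dweibull/pweibull with shape=1/sigma, scale=exp(mu)
mu <- 10.2; sigma <- 0.3; t0 <- 20000
h_form <- (1/(sigma*exp(mu))) * (t0/exp(mu))^((1/sigma)-1)
h_base <- dweibull(t0, shape=1/sigma, scale=exp(mu)) /
  pweibull(t0, shape=1/sigma, scale=exp(mu), lower.tail=FALSE)
all.equal(h_form, h_base)
```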

#generate fixed values for hazhat
hazhats <- seq(2.5e-5, 1e-4, length.out=100)

#likelihood profile
#note: in case of warning messages, decrease the width of the interval
#for possible values of Xmu in the llik.haz function
#that is, change the bounds to, for instance, Xmu+/-10*XmuSE, or Xmu+/-5*XmuSE
#should this not help, then decrease the width of the interval
#for possible values of Xsigma in the profile.haz function
#that is, change the bounds to, for instance, log(Xsigma)+/-4*XsigmaSE
ll <- profile.haz(hazhats=hazhats, x=Xdata, xd=Xd, t=t)

#plot profile likelihood for hazhat
plot(hazhats,ll, type='l', xlab=paste("hazard rate (at t=",t,")", sep=""), ylab="log-likelihood")

#include lower limit for log-likelihood values
loglikw <- llik.haz(modpar=log(Xsigma),
x=Xdata, xd=Xd, hazhat=hazhat, t=t)
alpha <- .10 #90% confidence interval
limit <- loglikw - .5*qchisq(1-alpha, df=1)
abline(h=limit,col="blue",lty=2)
#include MLE for hazhat
abline(v=hazhat, col="red", lty=2)
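The cutoff used above drops the maximized log-likelihood by half a chi-square quantile; for a 90% interval this drop is approximately 1.353:

```r
#vertical distance between the maximized log-likelihood and the cutoff
alpha <- 0.10
drop <- 0.5 * qchisq(1 - alpha, df = 1)
round(drop, 3)
```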

#compute limits of confidence interval by hand
limitlkh <- function(xobs, xd, t, hazhats) {
  profile.haz(x=xobs, xd, t, hazhats) - limit
}

lowerci <- fzero(limitlkh, c(2.5e-5, hazhat), xobs=Xdata, xd=Xd, t=t)
upperci <- fzero(limitlkh, c(hazhat, 1e-4), xobs=Xdata, xd=Xd, t=t)
c(lowerci$x, upperci$x)

#include confidence bounds in plot
abline(v=lowerci$x, col="darkgreen", lty=2)
abline(v=upperci$x, col="darkgreen", lty=2)

##lognormal distribution

#fit lognormal model to data
modX <- survreg(Surv(Distance,Cens)~1, data=ShAbs, dist="lognormal")
summary(modX)

#obtain MLE for mu and sigma
Xmu <- coef(modX)[1]
names(Xmu) <- "Xmu"
Xsigma <- modX$scale
names(Xsigma) <- "Xsigma"

#log-likelihood
(loglikw <- modX$loglik[1])

#construct variables for analysis
Xdata <- ShAbs$Distance
Xd <- ShAbs$Cens

##likelihood based interval for the hazard function

#specify time at which to compute the hazard rate and its confidence interval
t <- 20000

#function for computing abs(hazhat - f(XmuPar, Xsigma)/S(XmuPar, Xsigma)),
#i.e., the distance between a candidate hazard value and the lognormal
#hazard rate at time t
XmuMin1 <- function (XmuPar, Xsigma, hazhat, t){
  z <- (log(t)-XmuPar)/Xsigma
  fx <- dnorm(z)/(t*Xsigma) #pdf of failure times
  Sx <- 1-pnorm(z) #survival function
  abs(hazhat - fx/Sx)
}

#log-likelihood function
XmuSE <- sqrt(diag(vcov(modX)))[1] #extract standard error for Xmu

llik.haz <- function (modpar, x, xd, hazhat, t) {
  xsigma <- exp(modpar[1])

  xmu <- optim(par=Xmu, XmuMin1, method="Brent",
               lower=Xmu-20*XmuSE, upper=Xmu+20*XmuSE,
               hazhat=hazhat, Xsigma=xsigma, t=t)$par

  z <- (log(x)-xmu)/xsigma

  #log-likelihood for exact (not censored) and right-censored observations
  sum(log((( 1/(xsigma*x) * dnorm(z)) ^xd) * ( (1-pnorm(z)) ^(1-xd) )))
}

#profile function
XsigmaSE <- sqrt(diag(vcov(modX)))[2] #extract standard error for Xsigma

profile.haz <- function(x, xd, t, hazhats) {
  plkhazhat <- rep(0, length(hazhats))
  for (i in 1:length(hazhats)) {
    temp <- optim(c(log(Xsigma)), llik.haz, method='Brent',
                  control=list(fnscale=-1),
                  lower=log(Xsigma)-5*XsigmaSE, upper=log(Xsigma)+5*XsigmaSE,
                  x=x, xd=xd, t=t, hazhat=hazhats[i])
    plkhazhat[i] <- temp$value
  }
  plkhazhat
}

#MLE for the hazard rate at specified t
zt <- (log(t)-Xmu)/Xsigma
ft <- dnorm(zt)/(t*Xsigma) #pdf of failure times
St <- 1-pnorm(zt) #survival function
(hazhat <- as.vector(ft/St))
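The expression above is simply the lognormal density divided by the lognormal survival function; a quick check with arbitrary illustrative values (not the fitted ones):

```r
#the hazard built from the standardized z equals dlnorm/plnorm directly
mu <- 10.1; sigma <- 0.5; t0 <- 20000
z <- (log(t0) - mu)/sigma
h_form <- (dnorm(z)/(t0*sigma)) / (1 - pnorm(z))
h_base <- dlnorm(t0, meanlog=mu, sdlog=sigma) /
  plnorm(t0, meanlog=mu, sdlog=sigma, lower.tail=FALSE)
all.equal(h_form, h_base)
```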

#generate fixed values for hazhat
hazhats <- seq(2e-5, 1e-4, length.out=100)

#likelihood profile
#note: in case of warning messages, decrease the width of the interval
#for possible values of Xmu in the llik.haz function
#that is, change the bounds to, for instance, Xmu+/-10*XmuSE, or Xmu+/-5*XmuSE
#should this not help, then decrease the width of the interval
#for possible values of Xsigma in the profile.haz function
#that is, change the bounds to, for instance, log(Xsigma)+/-4*XsigmaSE
ll <- profile.haz(hazhats=hazhats, x=Xdata, xd=Xd, t=t)

#plot profile likelihood for hazhat
plot(hazhats,ll, type='l', xlab=paste("hazard rate (at t=",t,")", sep=""), ylab="log-likelihood")

#include lower limit for log-likelihood values
loglikw <- llik.haz(modpar=log(Xsigma),
x=Xdata, xd=Xd, hazhat=hazhat, t=t)
alpha <- .10 #90% confidence interval
limit <- loglikw - .5*qchisq(1-alpha, df=1)
abline(h=limit,col="blue",lty=2)
#include MLE for hazhat
abline(v=hazhat, col="red", lty=2)

#compute limits of confidence interval by hand
limitlkh <- function(xobs, xd, t, hazhats) {
  profile.haz(x=xobs, xd, t, hazhats) - limit
}

lowerci <- fzero(limitlkh, c(2e-5, hazhat), xobs=Xdata, xd=Xd, t=t)
upperci <- fzero(limitlkh, c(hazhat, 1e-4), xobs=Xdata, xd=Xd, t=t)
c(lowerci$x, upperci$x)

#include confidence bounds in plot
abline(v=lowerci$x, col="darkgreen", lty=2)
abline(v=upperci$x, col="darkgreen", lty=2)