## R code for computing variable importance for a survival model

The following R code computes the relative importance of the predictor variables in a survival model. The implemented method was inspired by Leo Breiman's method for computing variable importance in a Random Forest.

Breiman’s method for computing variable importance can best be explained with an example.
Suppose you have 5 predictor variables, say x1 to x5. These 5 variables are used to predict some observed response y. However, the predictions based on these 5 variables will deviate more or less from the observed values of y. The Mean Squared Error (MSE) is the mean of the squared deviations between the predictions and the observations.
Now assume that predictor x1 has no predictive value for the response y. If we randomly permute the observed values of x1, our predictions for y will hardly change. As a consequence, the MSE before and after permuting the observed values of x1 will be similar.
On the other hand, assume that x3 is strongly related to the response y. If we randomly permute the observed values of x3, the MSE after the permutation will increase considerably.
Based on these random permutations and the resulting changes in MSE, we may conclude that predictor variable x3 is more important than x1 in predicting y.
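The idea above can be sketched in a few lines of R. The data below are purely hypothetical (x3 carries a strong signal, x1 none), and a simple linear model stands in for the fitted model:

```r
# Minimal sketch of Breiman's permutation idea on simulated data:
# x3 drives the response, x1 is pure noise.
set.seed(42)
n <- 500
dat <- as.data.frame(matrix(rnorm(n * 5), ncol = 5))
names(dat) <- paste0("x", 1:5)
dat$y <- 3 * dat$x3 + 0.5 * dat$x2 + rnorm(n)

fit <- lm(y ~ ., data = dat)
base_mse <- mean((dat$y - predict(fit, dat))^2)

# Importance of a predictor = increase in MSE after permuting its values
importance <- sapply(paste0("x", 1:5), function(v) {
  perm <- dat
  perm[[v]] <- sample(perm[[v]])
  mean((dat$y - predict(fit, perm))^2) - base_mse
})
round(importance, 3)
```

Permuting x3 destroys most of the model's predictive accuracy, so its MSE increase is large; permuting x1 leaves the MSE essentially unchanged.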

The R code below applies Breiman's permutation method to a survival model to compute the relative importance of its predictor variables. However, instead of the MSE, the code uses concordance as the measure of prediction accuracy.
Furthermore, in two simulations the R code compares the performance of Breiman's method applied to survival models with that of Breiman's method as implemented in a random survival forest.
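The survival variant of the permutation method can be sketched as follows. This is a minimal illustration on simulated data (exponential survival times driven by x3, random censoring), using a Cox model from the `survival` package; the importance of a predictor is the drop in concordance after permuting its values:

```r
library(survival)

# Hypothetical survival data: x3 strongly affects the hazard, x1 is noise
set.seed(1)
n <- 300
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
dat$time <- rexp(n, rate = exp(0.5 * dat$x2 + 1.5 * dat$x3))
dat$status <- rbinom(n, 1, 0.8)  # roughly 20% random censoring

fit <- coxph(Surv(time, status) ~ x1 + x2 + x3, data = dat)
base_c <- concordance(fit)$concordance

# Importance of a predictor = drop in concordance after permuting its values
importance <- sapply(c("x1", "x2", "x3"), function(v) {
  perm <- dat
  perm[[v]] <- sample(perm[[v]])
  lp <- predict(fit, newdata = perm, type = "lp")
  base_c - concordance(Surv(perm$time, perm$status) ~ lp, reverse = TRUE)$concordance
})
importance
```

The `reverse = TRUE` argument is needed because a higher Cox linear predictor means higher risk, i.e. shorter survival.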

## R code for computing variable importance for a neural network

The following R code computes the relative importance of the input variables in a neural network. The implemented method was inspired by Leo Breiman's method for computing variable importance in a Random Forest.

Breiman's method for computing variable importance is explained in the previous section: permuting a predictor that carries no information about the response barely changes the MSE, whereas permuting a strongly related predictor increases it considerably.

The R code below applies Breiman’s permutation method for computing the relative importance of input variables in a neural network.
Furthermore, in three simulations the R code compares the performance of this method for neural networks with that of:

• Olden’s method for computing variable importance in a neural network
• Breiman’s permutation method for computing variable importance in a random forest
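Applied to a neural network, the permutation method looks much the same as before. The sketch below uses simulated data (x3 carries the signal) and a small single-hidden-layer network from the `nnet` package as a stand-in for the networks used in the actual code:

```r
library(nnet)

# Hypothetical data for a small network: x3 drives the response, x1 is noise
set.seed(7)
n <- 300
dat <- as.data.frame(matrix(runif(n * 5), ncol = 5))
names(dat) <- paste0("x", 1:5)
dat$y <- 3 * dat$x3 + 0.5 * dat$x2 + rnorm(n, sd = 0.2)

net <- nnet(y ~ ., data = dat, size = 4, linout = TRUE,
            trace = FALSE, maxit = 500, decay = 0.01)
base_mse <- mean((dat$y - predict(net, dat))^2)

# Importance of an input = increase in MSE after permuting its values
importance <- sapply(paste0("x", 1:5), function(v) {
  perm <- dat
  perm[[v]] <- sample(perm[[v]])
  mean((dat$y - predict(net, perm))^2) - base_mse
})
importance
```

Note that the importance values depend on the fitted network, so with a stochastic training procedure they will vary somewhat from run to run.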

The simulations in the R code are similar to the simulation described by Olden, Joy, and Death in their paper *An accurate comparison of methods to quantify variable importance in artificial neural networks using simulated data*. Note that Olden and colleagues refer to Olden's method for computing variable importance as the Connection Weights method.
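For comparison, Olden's Connection Weights method can be sketched as follows. For a single-hidden-layer network, the importance of input i is the sum over the hidden units j of the product of the input-to-hidden weight and the hidden-to-output weight. The extraction below assumes an `nnet` model with one output and no skip connections, whose weight vector stores, per hidden unit, its bias followed by the input weights, and then the output bias followed by the hidden-to-output weights:

```r
library(nnet)

# Olden's Connection Weights method for a single-hidden-layer nnet model
# (one output, no skip connections)
olden_importance <- function(net) {
  p <- net$n[1]  # number of inputs
  h <- net$n[2]  # number of hidden units
  w <- net$wts
  # p x h matrix of input-to-hidden weights (biases dropped)
  W_ih <- matrix(w[1:(h * (p + 1))], nrow = p + 1)[-1, , drop = FALSE]
  # h hidden-to-output weights (output bias dropped)
  W_ho <- w[(h * (p + 1) + 2):length(w)]
  drop(W_ih %*% W_ho)
}

set.seed(7)
n <- 300
dat <- as.data.frame(matrix(runif(n * 5), ncol = 5))
names(dat) <- paste0("x", 1:5)
dat$y <- 3 * dat$x3 + 0.5 * dat$x2 + rnorm(n, sd = 0.2)

net <- nnet(y ~ ., data = dat, size = 4, linout = TRUE,
            trace = FALSE, maxit = 500, decay = 0.01)
imp <- olden_importance(net)
names(imp) <- paste0("x", 1:5)
imp
```

Unlike the permutation method, the Connection Weights method also indicates the direction of each input's effect: a large positive value for x3 reflects its positive relation to y.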