[R] cross-validation complex model AUC Nagelkerke R squared code

From: Jürgen Biedermann <juergen.biedermann_at_googlemail.com>
Date: Tue, 12 Apr 2011 12:01:00 +0200

Hi there,

I have tried hard to understand this and to find my own solution, but now I think I have to ask for your help.
I have already written some script code for my problem, but I doubt that it is correct.

I have the following problem:

Imagine you develop a logistic regression model with a binary outcome Y
(0/1) and candidate predictors (X1, X2, X3, ...). The development of the
final model is quite complex and involves several steps (stepwise forward selection with LR-test statistics, incorporating interaction effects, etc.). The final prediction, however, comes from a glm object (called fit.glm). Then, I think, it would be no problem to calculate a Nagelkerke R squared and an AUC value (for example with the pROC package) following this script:

BaseRate <- table(Data$Y)[["1"]]/sum(table(Data$Y))  # P(Y = 1)
LIKM <- predict(fit.glm, type = "response")          # fitted probabilities

# logL(0) = log-likelihood of the null model
#         = sum(Data$Y*log(BaseRate) + (1-Data$Y)*log(1-BaseRate))
# logL(M) = log-likelihood of the fitted model
#         = sum(Data$Y*log(LIKM) + (1-Data$Y)*log(1-LIKM))

R2 = 1 - exp((2/n)*(logL(0) - logL(M)))  # = 1 - (L(0)/L(M))^(2/n), the Cox-Snell R^2;
                                         # Nagelkerke divides this by 1 - exp((2/n)*logL(0))

AUC <- auc(Data$Y, LIKM)                 # pROC

I checked this kind of calculation of the Nagelkerke R squared and the AUC value against the built-in calculation in the package "Design" and got consistent results.
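In case it helps, here is a self-contained version of the calculation above on simulated data. The data, coefficients, and the rank-based AUC are only for illustration (the rank/Wilcoxon formula should agree with pROC::auc, but keeps the sketch free of package dependencies):

```r
## Nagelkerke R^2 and AUC for a fitted glm, on simulated data (illustration only)
set.seed(1)
n <- 200
Data <- data.frame(X1 = rnorm(n), X2 = rnorm(n))
Data$Y <- rbinom(n, 1, plogis(-0.5 + 1.2 * Data$X1 - 0.8 * Data$X2))

fit.glm <- glm(Y ~ X1 + X2, data = Data, family = binomial)

BaseRate <- mean(Data$Y)                         # P(Y = 1) under the null model
LIKM     <- predict(fit.glm, type = "response")  # fitted probabilities

logL0 <- sum(Data$Y * log(BaseRate) + (1 - Data$Y) * log(1 - BaseRate))
logLM <- sum(Data$Y * log(LIKM)     + (1 - Data$Y) * log(1 - LIKM))

R2.CoxSnell   <- 1 - exp((2 / n) * (logL0 - logLM))       # 1 - (L0/LM)^(2/n)
R2.Nagelkerke <- R2.CoxSnell / (1 - exp((2 / n) * logL0)) # rescaled to max 1

## rank (Wilcoxon) AUC; pROC::auc(Data$Y, LIKM) should give the same number
r  <- rank(LIKM)
n1 <- sum(Data$Y == 1)
n0 <- n - n1
AUC <- (sum(r[Data$Y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
```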

Now I implement a cross-validation procedure, dividing the sample randomly into k subsamples of equal size. I then calculate the predicted probabilities for each k-th subsample from a model
(fit.glm_s) developed with the same algorithm as the whole-data
model (stepwise forward selection etc.), but fitted on all but the k-th subsample. I store the predicted probabilities and build up my LIKM vector (see above) in the following way:

LIKM[sub] <- predict(fit.glm_s, newdata = Data[sub, ], type = "response")

Now I use the same formulas/script as above; the only change is therefore the calculation of the LIKM vector.
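Concretely, the loop looks like the following sketch. For brevity it refits a plain glm with a fixed formula per fold, where the real procedure would rerun the full stepwise selection on each training set; the simulated data are only for illustration:

```r
## k-fold construction of the out-of-fold probability vector LIKM
set.seed(2)
n <- 200; k <- 10
Data <- data.frame(X1 = rnorm(n), X2 = rnorm(n))
Data$Y <- rbinom(n, 1, plogis(Data$X1 - Data$X2))

fold <- sample(rep(1:k, length.out = n))  # random folds of (nearly) equal size
LIKM <- numeric(n)
for (s in 1:k) {
  sub <- which(fold == s)                 # indices of the held-out subsample
  ## fit on everything except fold s (stepwise selection would go here)
  fit.glm_s <- glm(Y ~ X1 + X2, data = Data[-sub, ], family = binomial)
  ## predict the held-out fold: newdata = Data[sub, ]
  LIKM[sub] <- predict(fit.glm_s, newdata = Data[sub, ], type = "response")
}
```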

BaseRate <- table(Data$Y)[["1"]]/sum(table(Data$Y))  # P(Y = 1)
# ... calculation of the cross-validated LIKM, see above ...

# logL(0) = sum(Data$Y*log(BaseRate) + (1-Data$Y)*log(1-BaseRate))
# logL(M) = sum(Data$Y*log(LIKM) + (1-Data$Y)*log(1-LIKM))

R2 = 1 - exp((2/n)*(logL(0) - logL(M)))  # Cox-Snell; Nagelkerke divides by 1 - exp((2/n)*logL(0))

AUC <- auc(Data$Y, LIKM)

When I compare my results (using more simply developed models) with the validate method in package "Design" (method="cross", B=10), I seem to consistently underestimate the true expected Nagelkerke R squared. Additionally, I'm very unsure whether my way of calculating a cross-validated AUC is correct.
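To make the comparison concrete, here is a minimal self-contained sketch that computes the apparent and the cross-validated AUC side by side (rank-based AUC instead of pROC, a plain glm per fold instead of the full stepwise selection, and simulated data; all of this is illustration, not my actual model):

```r
## apparent vs cross-validated AUC on one simulated data set
auc_rank <- function(y, p) {            # Wilcoxon/rank formula for the AUC
  r <- rank(p); n1 <- sum(y == 1); n0 <- length(y) - n1
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(3)
n <- 300; k <- 10
Data <- data.frame(X1 = rnorm(n), X2 = rnorm(n))
Data$Y <- rbinom(n, 1, plogis(0.8 * Data$X1))   # X2 is pure noise

fit.glm <- glm(Y ~ X1 + X2, data = Data, family = binomial)
AUC.apparent <- auc_rank(Data$Y, predict(fit.glm, type = "response"))

fold <- sample(rep(1:k, length.out = n))
LIKM <- numeric(n)
for (s in 1:k) {
  sub <- which(fold == s)
  fit.glm_s <- glm(Y ~ X1 + X2, data = Data[-sub, ], family = binomial)
  LIKM[sub] <- predict(fit.glm_s, newdata = Data[sub, ], type = "response")
}
AUC.cv <- auc_rank(Data$Y, LIKM)  # typically somewhat below AUC.apparent
```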

Is there an error in my reasoning about how to easily obtain cross-validated AUC and R squared values for a model developed to predict a binary outcome?

I hope my problem is understandable and that you can help me.

Best regards,

Jürgen Biedermann
Bergmannstraße 3
10961 Berlin-Kreuzberg
Mobil: +49 176 247 54 354
Home: +49 30 250 11 713
e-mail: juergen.biedermann_at_gmail.com

R-help_at_r-project.org mailing list
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Received on Tue 12 Apr 2011 - 11:31:53 GMT

Archive maintained by Robert King, hosted by the discipline of statistics at the University of Newcastle, Australia.
