From: wu sz <r.shengzhe_at_gmail.com>

Date: Sat 23 Jul 2005 - 09:30:27 EST

R-help@stat.math.ethz.ch mailing list

https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Received on Sat Jul 23 09:36:16 2005


Hello,

I have a data set with 15 variables (the first one is the response) and 1200 observations. I use the pls package to do PLSR with leave-one-out cross-validation, as below.

trainSet = as.data.frame(scale(trainSet, center = TRUE, scale = TRUE))
trainSet.plsr = mvr(formula, ncomp = 14, data = trainSet, method = "kernelpls",
                    CV = TRUE, validation = "LOO", model = TRUE, x = TRUE, y = TRUE)

After that I want to obtain the value of "se", the estimated standard errors of the cross-validation estimates, which is mentioned in the help page for MSEP but not implemented yet, so I wrote my own code to calculate it. The results I get do not look right, and I wonder which step below is wrong.

y = trainSet.plsr$y
p = as.data.frame(trainSet.plsr$validation$pred)

# squared prediction error for each observation and each number of components
msep_element = matrix(nrow = 1200, ncol = 14)
i = 1
while (i <= length(p)) {
    msep_element[, i] = (p[[i]] - y)^2
    i = i + 1
}

msep = colMeans(msep_element)
msep_sd = apply(msep_element, 2, sd)   # column-wise sd of the squared errors
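In case it helps, the loop collapses to a vectorized form, and if "se" simply means the usual standard error of the mean (sd of the squared errors divided by sqrt(n)) it can be computed in one more line. The data below are random stand-ins for my real y and p, and the sd/sqrt(n) definition of "se" is only my assumption, since the MSEP help page does not spell it out:

```r
# Illustrative stand-ins: 1200 observations, 14 numbers of components.
set.seed(1)
n <- 1200
k <- 14
y <- rnorm(n)                                    # response (stand-in)
p <- matrix(rnorm(n * k), nrow = n, ncol = k)    # cross-validated predictions (stand-in)

msep_element <- (p - y)^2                        # y is recycled down each column
msep    <- colMeans(msep_element)                # MSEP for each number of components
msep_se <- apply(msep_element, 2, sd) / sqrt(n)  # assumed "se": sd of the mean
```

Under that assumption msep_se is smaller than the column-wise sd by a factor of sqrt(1200), which might explain values that look "much larger" than expected.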

Then I compared "msep" with "trainSet.plsr$validation$MSEP", and they are the same, but the values of "msep_sd" seem much larger than I expected. Is this the same as "se"? If not, how should the "se" of the cross-validation be calculated?

Thank you,

Shengzhe


This archive was generated by hypermail 2.1.8 : Fri 03 Mar 2006 - 03:33:58 EST