From: Ben Bolker <bolker_at_zoo.ufl.edu>

Date: Sat 03 Jul 2004 - 23:42:37 EST

R-devel@stat.math.ethz.ch mailing list

https://www.stat.math.ethz.ch/mailman/listinfo/r-devel Received on Sat Jul 03 23:46:42 2004


Quick question:

Poking around and comparing the performance of do_optimhess (C code within optim.c) and fdHess (in the nlme package), it looks like do_optimhess evaluates an n-parameter function (2*n)^2 times, while fdHess evaluates it (n+1)*(n+2)/2 times, to find a numeric estimate of the Hessian -- and only (n^2+1) of do_optimhess's evaluations are for unique values of the parameters. Is there something I'm missing?
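For concreteness, (n+1)*(n+2)/2 is exactly the number of coefficients in a full quadratic model in n variables (1 constant + n linear + n(n+1)/2 quadratic terms), i.e. the minimum number of function values that pin down a Hessian. Here is a small Python sketch (illustration only, not nlme's actual fdHess code) of one forward-difference scheme that achieves that count -- the function and helper names are my own:

```python
# Finite-difference Hessian from the minimal point set:
# f at x, x + h*e_i, x + 2h*e_i, and x + h*e_i + h*e_j (i < j) --
# 1 + 2n + n(n-1)/2 = (n+1)(n+2)/2 evaluations in total.
# (A sketch of *a* scheme with this cost, not nlme's fdHess code.)

def fd_hessian_minimal(f, x, h=1e-3):
    n = len(x)
    vals = {}                        # cache: point tuple -> f(point)

    def fx(*steps):
        """f at x shifted by h along each listed coordinate index."""
        p = list(x)
        for i in steps:
            p[i] += h
        key = tuple(round(c, 12) for c in p)
        if key not in vals:
            vals[key] = f(p)
        return vals[key]

    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # diagonal entry: forward second difference along e_i
        H[i][i] = (fx(i, i) - 2 * fx(i) + fx()) / h**2
        for j in range(i + 1, n):
            # off-diagonal entry: mixed forward difference
            H[i][j] = H[j][i] = (fx(i, j) - fx(i) - fx(j) + fx()) / h**2
    return H, len(vals)              # estimate and # of unique evaluations
```

For a quadratic objective the forward differences are exact (up to rounding), so a quick check against f(p) = p'Ap/2 recovers A from exactly (n+1)(n+2)/2 evaluations.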

(My guess is that it's the result of evaluating the Hessian by finite-differencing the gradient, so that successive gradient evaluations overlap in the function values they compute -- an approach that would make sense if the gradient were computed directly rather than by finite differencing.)
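The duplication is easy to demonstrate. In a "central difference of a central-difference gradient" scheme, every entry H[i][j] costs four f-evaluations, (2*n)^2 calls in all, but many of the evaluation points coincide (e.g. x + h*e_i - h*e_i = x). A Python sketch of the counting argument (I haven't verified that this is literally what do_optimhess does):

```python
# Naive Hessian by central-differencing a central-difference gradient:
# 4 calls per (i, j) entry, (2n)^2 calls total, with heavy duplication.
# (Illustration of the counting argument, not the optim.c code itself.)

def hessian_call_count(f, x, h=1e-3):
    n = len(x)
    calls = 0
    seen = set()                     # distinct evaluation points

    def fx(i, si, j, sj):
        nonlocal calls
        p = list(x)
        p[i] += si * h
        p[j] += sj * h
        calls += 1
        seen.add(tuple(round(c, 12) for c in p))
        return f(p)

    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            H[i][j] = (fx(i, +1, j, +1) - fx(i, +1, j, -1)
                       - fx(i, -1, j, +1) + fx(i, -1, j, -1)) / (4 * h**2)
    return H, calls, len(seen)
```

With n = 3 this makes 36 = (2*3)^2 calls but visits only 19 distinct points (x itself, the six x +/- 2h*e_i, and the twelve x +/- h*e_i +/- h*e_j with i < j), so memoizing on the parameter vector would already cut the cost roughly in half.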

The reason I ask is that I'm working with some objective functions that are moderately slow, and the Hessian calculation at the end can take nearly as much time/as many function evaluations as finding the optimum in the first place (64 vs. 79 function evaluations in one recent example).

I can hack my own optim() that substitutes fdHess for do_optimhess at the end when no gradient function is defined, but I just wanted to know whether that would be a bad idea. (?fdHess cites a reason for its design, but I don't see any explanation lurking in the do_optimhess code.)

[This is in R 1.9.0, but I don't see anything in the NEWS file for 1.9.1 indicating that this has changed.]

thanks,

Ben Bolker


This archive was generated by hypermail 2.1.8 : Wed 03 Nov 2004 - 22:45:01 EST