Re: [R] maximum likelihood, 1st and 2nd derivative

From: Ben Bolker <>
Date: Thu 11 Jan 2007 - 22:03:40 GMT

francogrex <francogrex <at>> writes:

> [SNIP]
> This maximisation involves a search in five-dimensional parameter space
> {θ: α1, α2, β1, β2, P} for the vector that maximises the likelihood, as
> evidenced by the first and second derivatives of the function being zero.
> The likelihood is
>
>   L(θ) = Π_ij { P f(N_ij; α1, β1, E_ij) + (1-P) f(N_ij; α2, β2, E_ij) }
>
> This involves millions of calculations. The computational procedures
> required for these calculations are based on the Newton-Raphson method.
> This is an old calculus-based technique devised to find the roots of an
> equation, i.e. the values of the independent variable x for which the
> function f(x) equals zero."
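
  For reference, the Newton-Raphson iteration the quoted text describes can
be sketched in a few lines of R. This is a generic root-finder, not the
poster's actual five-parameter fitting code; `f` and `fprime` are
user-supplied functions for the equation and its derivative.

```r
## Minimal Newton-Raphson root-finder: iterate x <- x - f(x)/f'(x)
## until the step size is (numerically) zero.
newton <- function(f, fprime, x0, tol = 1e-8, maxit = 100) {
  x <- x0
  for (i in seq_len(maxit)) {
    step <- f(x) / fprime(x)
    x <- x - step
    if (abs(step) < tol) break
  }
  x
}

## Example: root of x^2 - 2, i.e. sqrt(2)
root <- newton(function(x) x^2 - 2, function(x) 2 * x, x0 = 1)
root  # approximately sqrt(2)
```

In maximum-likelihood fitting the same iteration is applied to the score
function (the gradient of the log-likelihood), whose roots are the
candidate maxima.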

  I'm sure someone will correct me if I'm wrong, but this seems wrong to me. We only want the first derivatives to be zero; at a (local) maximum the second derivatives should be negative (a negative definite Hessian), not zero. It wouldn't be impossible for second and higher derivatives to be zero as well, but it would be somewhat pathological.
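
  A one-dimensional illustration of the point, using R's symbolic
derivative `D()` on a toy function (my example, not the poster's
likelihood): at the maximum of f(x) = -(x-1)^2 the first derivative is
zero and the second derivative is negative, not zero.

```r
## Symbolically differentiate f(x) = -(x - 1)^2 twice with D()
f  <- expression(-(x - 1)^2)
d1 <- D(f, "x")    # first derivative:  -(2 * (x - 1))
d2 <- D(d1, "x")   # second derivative: -2

x <- 1             # the maximum of f
eval(d1)           # 0  (first derivative vanishes at the maximum)
eval(d2)           # -2 (second derivative is negative, confirming a maximum)
```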
  While optimization will be faster and more stable if you can compute the first derivatives (the _gradient_) analytically (and R has the D() and deriv() functions for doing so), R will compute derivatives numerically by finite differences if you don't supply them. See ?optim.
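
  A sketch of what that looks like in practice, using a toy quadratic
objective (my stand-in, not the poster's mixture likelihood): `deriv()`
builds a function returning both the value and its gradient, and the
gradient is passed to `optim()` as the `gr` argument.

```r
## deriv() returns a function of (a, b) whose result carries a
## "gradient" attribute alongside the objective value.
g <- deriv(~ (a - 1)^2 + (b - 2)^2, c("a", "b"), function.arg = TRUE)

fn <- function(p) as.numeric(g(p[1], p[2]))
gr <- function(p) as.numeric(attr(g(p[1], p[2]), "gradient"))

## With gr supplied, optim() uses the analytic gradient; omitting gr
## makes method = "BFGS" fall back to finite-difference derivatives.
fit <- optim(c(0, 0), fn, gr, method = "BFGS")
fit$par  # close to c(1, 2), the minimum of the toy objective
```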

  Blatant plug: pp. 3-4 might be helpful too.

  Ben Bolker

mailing list: PLEASE do read the posting guide and provide commented, minimal, self-contained, reproducible code.

Received on Fri Jan 12 09:10:59 2007

Archive maintained by Robert King, hosted by the discipline of statistics at the University of Newcastle, Australia.
Archive generated by hypermail 2.1.8, at Thu 11 Jan 2007 - 23:30:27 GMT.
