Re: [R] difference between nls and nlme approaches

From: Prof Brian Ripley <ripley_at_stats.ox.ac.uk>
Date: Fri 23 Jul 2004 - 23:33:37 EST


Your first approach is quite common, and used to be very common. The difficulty can be getting the individual nls fits to converge, and the advantage of the combined fitting is that `borrowing strength' happens.

I don't understand at present why you want to weight the results. You say you are interested in the family-level variance -- given there are around 30 trees per family you can (and I would) just treat the summary statistics of the fit for each tree as the data and apply a one-level nested analysis of variance. How accurately a parameter is determined for a single tree seems not to be relevant to the question you are asking.
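A minimal sketch of that two-stage route, on simulated data (the logistic model, the family structure, and all parameter values below are illustrative assumptions, not the poster's actual model):

```r
library(nlme)   # provides nlsList

## Simulated stand-in data: 8 families x 5 trees, logistic transpiration curves
set.seed(1)
fam  <- factor(rep(1:8, each = 5))                  # family of each tree
asym <- 20 + rnorm(8, 0, 2)[fam] + rnorm(40, 0, 1)  # family + tree effects
transp <- expand.grid(time = 1:10, Tree = factor(1:40))
transp$Family <- fam[transp$Tree]
transp$y <- asym[transp$Tree] / (1 + exp((5 - transp$time) / 1.5)) +
            rnorm(nrow(transp), 0, 0.3)

## Stage 1: fit the nonlinear model to each tree separately
fits <- nlsList(y ~ SSlogis(time, Asym, xmid, scal) | Tree, data = transp)

## Stage 2: treat the per-tree estimates as data and run a one-way
## analysis of variance with Family as the grouping factor
est <- coef(fits)
est$Family <- fam[as.integer(rownames(est))]
summary(aov(Asym ~ Family, data = est))
```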

On Fri, 23 Jul 2004, Stefano Leonardi wrote:

> Hello,
> I have a question that is more about statistics than
> about how nlme and R function.
>
> I want to fit a complicated nonlinear model with nlme to several
> different measures of transpiration taken on each of 220 trees
> grouped in 8 families. The model has three unknown parameters plus
> their variances (and covariances). I want to estimate the variances
> of the parameters among families. This would give an idea of the
> genetic control of the biological processes modelled.
>
> The question then is:
> what are the conceptual differences between
> 1) fitting the model with nls on each tree separately and then looking at
> the variance among families of the parameters with lme,
> and
> 2) carrying out one single fit with nlme using Family/Tree
> hierarchical grouping (Tree and Family are treated as random factors)
> and estimating the within- and among-family variances of the parameters?

This also estimates within and between trees, though.
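For concreteness, the combined fit of approach (2) might look like the sketch below, on simulated data (the model, the values, and restricting the random effect to a single parameter are all illustrative assumptions; convergence on real data is exactly the poster's difficulty):

```r
library(nlme)

## Simulated stand-in data: 8 families x 5 trees, logistic curves
set.seed(1)
fam  <- factor(rep(1:8, each = 5))
asym <- 20 + rnorm(8, 0, 2)[fam] + rnorm(40, 0, 1)
transp <- expand.grid(time = 1:10, Tree = factor(1:40))
transp$Family <- fam[transp$Tree]
transp$y <- asym[transp$Tree] / (1 + exp((5 - transp$time) / 1.5)) +
            rnorm(nrow(transp), 0, 0.3)

## One combined fit: Tree nested within Family, both random.
## Letting only Asym vary randomly keeps the model small enough to converge.
fm <- nlme(y ~ SSlogis(time, Asym, xmid, scal),
           fixed  = Asym + xmid + scal ~ 1,
           random = Asym ~ 1 | Family/Tree,
           data   = transp,
           start  = c(Asym = 20, xmid = 5, scal = 1.5))
VarCorr(fm)   # among-family and between-tree variance components of Asym
```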

> I understand from the Pinheiro and Bates book that there is a
> degrees-of-freedom issue, but what are the real advantages and
> disadvantages of the two approaches?

The standard issues about the use of random vs fixed effects apply. With groups as large as yours, there should not be much difference.

> This issue is prompted by the fact that it is easy for me to
> get results using the first approach, but there is no way that
> the second approach reaches any kind of convergence: it takes hours and
> then it stops without reaching satisfactory results. I have tried several
> simplifications of the model without particular success.
>
> Right now I am using the first approach, weighting the parameters with the
> inverse of their errors as estimated by nls. I am using
> the weights option of lme with something like the
> weights = varFixed(~ 1/vector_of_errors_of_parameter) approach.
> Am I completely out of my mind, or am I heading in more or less the
> right direction?
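For what it is worth, varFixed's one-sided formula names the variance covariate itself, so to make each estimate's residual variance proportional to its squared standard error one would pass the squared errors rather than their reciprocals. A sketch on made-up per-tree estimates (all names and values below are illustrative assumptions):

```r
library(nlme)

## Made-up stage-1 output: one Asym estimate and its standard error per tree
set.seed(2)
est <- data.frame(Family = factor(rep(1:8, each = 5)),
                  se     = runif(40, 0.2, 0.6))
est$Asym <- 20 + rnorm(8, 0, 2)[est$Family] + rnorm(40, 0, est$se)
est$v <- est$se^2   # variance covariate: residual variance proportional to se^2

## Among-family variance of the parameter, weighting by estimation precision
fm <- lme(Asym ~ 1, random = ~ 1 | Family, data = est,
          weights = varFixed(~ v))
VarCorr(fm)   # among-family variance component of Asym
```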
>
> Thank you very much to whoever tries to answer this question.
>
> Stefano
>

-- 
Brian D. Ripley,                  ripley@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Received on Fri Jul 23 23:43:20 2004
