Re: [R] storing the estimates from lmer

From: Douglas Bates <>
Date: Sun 16 Jul 2006 - 04:39:43 EST

Hi Spencer,

On 7/15/06, Spencer Graves <> wrote:
> p.s. I intended to include the following extension to an example from
> the 'lmer' help page:
> fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> (fm1.fix <- fixef(fm1))
> (fm1.se <- sqrt(diag(vcov(fm1))))
> fm1.fix/fm1.se
> fm1.ran <- VarCorr(fm1)
> diag(fm1.ran$Subject)

I'm confident that you are aware that sqrt(diag(vcov(fm1))) and sqrt(diag(fm1.ran$Subject)) refer to different parameters of the model. However, some readers of your reply may not see the distinction, so allow me to try to clarify.

The summary for this fitted model is

lmer> (fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy))
Linear mixed-effects model fit by REML
Formula: Reaction ~ Days + (Days | Subject)
   Data: sleepstudy
      AIC      BIC    logLik MLdeviance REMLdeviance
 1753.628 1769.593 -871.8141   1751.986     1743.628
Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Subject  (Intercept) 612.090  24.7405
          Days         35.072   5.9221  0.066
 Residual             654.941  25.5918

number of obs: 180, groups: Subject, 18

Fixed effects:

            Estimate Std. Error t value
(Intercept) 251.4051     6.8246  36.838
Days         10.4673     1.5458   6.771

Correlation of Fixed Effects:
     (Intr)
Days -0.138

The fixed effects described in the lower part of this summary are a familiar type of parameter in a statistical model: they are coefficients in a linear predictor. We produce estimates of these parameters and also a measure of the precision of those estimates, their standard errors. The vcov generic function returns the estimated variance-covariance matrix of the parameter estimates (typically parameters that are coefficients in a linear predictor). Thus

> sqrt(diag(vcov(fm1)))
[1] 6.824558 1.545789

provides the standard errors of the fixed effects estimates.
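To make the connection with the summary table explicit, here is a short sketch (assuming the lme4 package and its sleepstudy data are installed) that recovers the Estimate, Std. Error and t value columns from fixef and vcov:

```r
## Sketch: reproduce the fixed-effects table from fixef() and vcov()
## (assumes lme4 is installed; sleepstudy ships with it)
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
est <- fixef(fm1)             # the Estimate column of the summary
se  <- sqrt(diag(vcov(fm1)))  # the Std. Error column
est / se                      # the t value column
```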

The random effects, although they also appear in the linear predictor, are not formally parameters in the model. They are unobserved random variables whose variance-covariance matrix has a known form but unknown value. The parameters in the model are the values that determine this variance-covariance matrix, and they are returned by VarCorr

> VarCorr(fm1)
$Subject
2 x 2 Matrix of class "dpoMatrix"
            (Intercept)     Days
(Intercept)   612.09032  9.60428
Days            9.60428 35.07165

attr(,"sc")
[1] 25.59182

In other words, the 2 x 2 matrix shown above is the estimate of the variance-covariance matrix of the random effects associated with the grouping factor "Subject".
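As an aside, the Std.Dev. and Corr columns in the summary shown earlier are just re-expressions of this covariance matrix. A base-R sketch, using the numbers printed above:

```r
## Sketch: recover the Std.Dev./Corr display from the covariance matrix
## (entries copied from the VarCorr output above)
vc <- matrix(c(612.09032, 9.60428, 9.60428, 35.07165), nrow = 2,
             dimnames = rep(list(c("(Intercept)", "Days")), 2))
sqrt(diag(vc))  # standard deviations 24.74 and 5.92, as in the summary
cov2cor(vc)     # off-diagonal 0.066, the Corr entry in the summary
```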


> sqrt(diag(VarCorr(fm1)$Subject))

[1] 24.740459 5.922132

gives the estimated standard deviations of the random effects. These are estimates of parameters in the model; they are *not* standard errors of parameter estimates. The lmer function and related software intentionally do not return standard errors of the estimated variance components. Below I give my familiar rant on why I think returning such standard errors is bad practice. I encourage users of lmer who wish to assess the precision of the variance-component estimates to create a Markov chain Monte Carlo sample of the parameters and evaluate HPD intervals from it.

> sm1 <- mcmcsamp(fm1, 50000)
> library(coda)

Warning message:
use of NULL environment is deprecated
> HPDinterval(sm1)

                          lower        upper
(Intercept)         236.6518363  266.5465536
Days                  7.0136243   13.8947993
log(sigma^2)          6.2550082    6.7295329
log(Sbjc.(In))        5.4928205    7.5751372
log(Sbjc.Days)        2.8197523    4.6337518
atanh(Sbj.(I).Dys)   -0.6988632    0.9836688
deviance           1752.5158501 1766.6461469
attr(,"Probability")
[1] 0.95
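Note that the interval for log(sigma^2) can be mapped back to the standard-deviation scale by hand. A base-R sketch, using the bounds printed above:

```r
## Sketch: back-transform the log(sigma^2) HPD interval to the sigma scale
ci_log_s2 <- c(6.2550082, 6.7295329)  # bounds copied from the output above
sqrt(exp(ci_log_s2))  # roughly 22.8 to 28.9, bracketing sigma-hat = 25.59
```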

Some software, notably SAS PROC MIXED, does produce standard errors for the estimates of variances and covariances of random effects. In my opinion this is more harmful than helpful. The only use I can imagine for such standard errors is to form confidence intervals or to evaluate a z-statistic or something like that to be used in a hypothesis test. However, those uses require that the distribution of the parameter estimate be symmetric, or at least approximately symmetric, and we know that the distribution of the estimate of a variance component is more like a scaled chi-squared distribution which is anything but symmetric. It is misleading to attempt to summarize our information about a variance component by giving only the estimate and a standard error of the estimate. </rant>
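A small base-R simulation (my own illustration, not from the discussion above) makes the point: for small samples the sampling distribution of a variance estimate is strongly right-skewed, so a symmetric estimate-plus-or-minus-two-standard-errors summary can be badly misleading, even extending below zero:

```r
## Sketch: the sampling distribution of a variance estimate is right-skewed
set.seed(1)
vhat <- replicate(10000, var(rnorm(5)))  # variance estimates, true sigma^2 = 1
## 4 * vhat ~ chi-squared(4), which is far from symmetric
mean(vhat) > median(vhat)                # TRUE: a long right tail
mean(vhat) - 2 * sd(vhat)                # a "normal" lower bound below zero
```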
> ########################
> The structure of 'lmer' objects and helper functions is outlined in
> the 'lmer' and 'lmer-class' help pages. The latter mentions "'vcov
> 'signature(object = "mer")': Calculate variance-covariance matrix of the
> _fixed_ effect terms, see also 'vcov'." Thus,
> sqrt(diag(vcov(lmer.object))) should give you standard errors of the
> fixed effects.
> The parameter estimates can be obtained using 'VarCorr'. However,
> characterizing their random variability is harder, because their
> distribution is not as simply summarized. The 'lmer-class' help page
> says that an 'lmer' object includes a slot, "'Omega': A list of
> positive-definite matrices stored as '"dpoMatrix"' objects that are the
> relative precision matrices of the random effects associated with each
> of the grouping factors." However, I don't know how to use this. For
> problems like this, the 'lme4' and 'Matrix' packages include a function
> 'simulate', which is what I might use, at least until I figured out how
> to get essentially the same answer from the Omega slot.
> Hope this helps,
> Spencer Graves
> prabhu bhaga wrote:
> > Dear all,
> >
> > I'm trying to store/extract the mean & standard error of the fixed-effects
> > parameters and the variance of the random-effects parameters from the "lmer"
> > procedure in the lme4 package developed by Bates and Pinheiro. While storing
> > the fixed-effects parameters is straightforward, the same is not true for
> > storing the variance parameters of the random effects. Kindly help me.
> >
> > ~prabhu
> >
> >
> > ______________________________________________
> > mailing list
> >
> > PLEASE do read the posting guide!
Received on Sun Jul 16 04:47:13 2006

Archive maintained by Robert King, hosted by the discipline of statistics at the University of Newcastle, Australia.
Archive generated by hypermail 2.1.8, at Tue 18 Jul 2006 - 04:16:59 EST.

Mailing list information is available online. Please read the posting guide before posting to the list.