RE: GLMs: show me the model!

From: Sama Low Choy <s.lowchoy_at_qut.edu.au>
Date: Fri, 20 Feb 2009 10:49:10 +1000

Patrick,

This debate has been very illuminating. Thank you, Patrick, for posing the question.

Bill's seemingly obvious comments are quite powerful: write down the model so that the mean and variance are embedded within the key model statement about the assumed distribution, including the (inverse) "link" to the covariates. Within a Bayesian context it is perhaps even more important to be aware of this way of "writing" down the model, as implied by Murray's reference to Gelman & Hill's book.

I thought I would just add a few comments/queries that may be of interest from a "pedagogical" standpoint (blame John Marriott for explaining this word to us recently!)

ERRORS AND LEAST SQUARES
1. The way of writing down the Normal model with additive errors seems to have been motivated by early least squares (including partial LS and weighted LS) approaches to estimation. To do LS you need the errors explicitly, e_i = Y_i - X_i*beta, so that you can minimize the LS criterion on the errors, min sum_i e_i^2.
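
To make this concrete, here is a minimal R sketch (simulated data, with values chosen purely for illustration) in which the LS criterion on the explicit errors is minimized numerically and agrees with lm():

  ## Sketch only: least squares via the explicit errors e_i = y_i - x_i*beta,
  ## minimized numerically and compared with lm().  Data simulated for illustration.
  set.seed(1)
  n <- 50
  x <- runif(n)
  y <- 2 + 3 * x + rnorm(n, sd = 0.5)

  X <- cbind(1, x)                                   # design matrix with intercept
  ls_crit <- function(beta) sum((y - X %*% beta)^2)  # sum_i e_i^2
  fit_ls <- optim(c(0, 0), ls_crit)                  # minimize the LS criterion directly

  fit_lm <- lm(y ~ x)                                # the usual least squares fit
  cbind(optim = fit_ls$par, lm = coef(fit_lm))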

When undertaking estimation based on the likelihood (eg maximum likelihood), the distributional way of writing down the model becomes more useful. This includes Bayesian approaches where the likelihood is required - either for a computational approach (eg in WinBUGS, where GLMs can be specified simply by giving the likelihood, the priors, and the link to covariates) or for an analytic approach where the marginal posterior distributions of the parameters are written down explicitly.
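
As a small illustration of the distributional specification (again just a sketch with simulated data; the Poisson family and log link are arbitrary choices), the GLM can be written as a likelihood and maximized directly, matching glm():

  ## Sketch only: a Poisson GLM with log link specified via its likelihood
  ## and maximized with optim(), then compared with glm().
  set.seed(2)
  n <- 200
  x <- rnorm(n)
  y <- rpois(n, lambda = exp(0.5 + 0.8 * x))

  negloglik <- function(beta) {
    eta <- beta[1] + beta[2] * x             # linear predictor
    mu  <- exp(eta)                          # inverse link
    -sum(dpois(y, lambda = mu, log = TRUE))  # negative log-likelihood
  }
  fit_ml  <- optim(c(0, 0), negloglik)
  fit_glm <- glm(y ~ x, family = poisson(link = "log"))
  cbind(optim = fit_ml$par, glm = coef(fit_glm))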

ERRORS AND PARAMETERIZATION
2. The parameterisation of the model does not always lead to a clean separation of the errors (ie into an additive or multiplicative expression), let alone permit separation of the mean from the variance. I encourage you to investigate alternative parameterisations of GLMs, or of other non-Gaussian models where the covariates are related to the distribution of the response (eg Beta regression).

For example, a Beta regression may be parameterised so that the covariates are linked to the mean or perhaps even the mode; the remaining parameter (related to *both* the variance and the mean) can be expressed as the variance, as an "effective sample size", or as a correlation.
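
A minimal sketch of one such parameterisation (simulated data; the logit link for the mean and the interpretation of the precision phi as an "effective sample size" are illustrative choices, not the only ones):

  ## Sketch only: Beta regression with mean mu (logit-linked to a covariate)
  ## and precision phi, so shape1 = mu*phi and shape2 = (1 - mu)*phi.
  set.seed(3)
  n   <- 300
  x   <- runif(n)
  mu  <- plogis(-1 + 2 * x)            # inverse logit link for the mean
  phi <- 20                            # precision ("effective sample size")
  y   <- rbeta(n, shape1 = mu * phi, shape2 = (1 - mu) * phi)

  negloglik <- function(par) {
    mu  <- plogis(par[1] + par[2] * x)
    phi <- exp(par[3])                 # keep the precision positive
    -sum(dbeta(y, mu * phi, (1 - mu) * phi, log = TRUE))
  }
  fit <- optim(c(0, 0, 0), negloglik)
  c(beta0 = fit$par[1], beta1 = fit$par[2], phi = exp(fit$par[3]))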

MODEL FORMULATION FOR ESTIMATION
3. More generally, the "way of writing down the model" can be very closely linked to the method of estimation; this seems to be true for a range of statistical models beyond GLMs. For example, consider the suite of clustering techniques (see the nice exposition in Hastie, Tibshirani and Friedman's statistical learning text). In most cases the "model" is written down in a form that is suited to the usual method of estimation. See Christian Robert's recent efforts to re-express the Nearest Neighbour clustering technique as a particular form of finite mixture model.

Another example is classification trees. In the Recursive Partitioning Algorithm implemented in RPART (an R library by Atkinson & Therneau, 1997), the model is written down in a way that is linked to the estimation method. However, for a Bayesian estimation approach the full likelihood model is required - see O'Leary et al (2008; Journal of Applied Statistics, 3(1)), which cites the seminal references Chipman, George & McCulloch (1998) and Denison, Mallick & Smith (1998).
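
To get a feel for the algorithmic specification, a short fit with rpart on a built-in data set (purely illustrative):

  ## Sketch only: a classification tree via recursive partitioning.  The
  ## "model" is effectively defined by the greedy splitting procedure rather
  ## than by an explicit full likelihood.
  library(rpart)
  fit <- rpart(Species ~ ., data = iris, method = "class")
  print(fit)      # the fitted partition (splits and terminal-node classes)
  printcp(fit)    # complexity-parameter table used for pruning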

ESTIMATION VS MODEL
4. When working with ecologists, I have often found it confusing that the model is sometimes quoted via the estimation method. One example is the terminology used when referring to alternatives to logistic regression for modelling habitat; see Elith et al (2006; Ecography, 29(2): 129-151). Here the "maximum entropy" approach is mentioned, which seems to refer to moment-matching on the mean and variance by applying the ubiquitous entropy model from statistical physics.

Regards,

Dr Samantha Low Choy
Research Fellow
(1) ARC Discovery Grant: Bayesian Priors
(2) Epidemiological study into the long-term respiratory health effects of ultrafine air pollutants from traffic emissions on schoolchildren.

School of Mathematical Sciences
Queensland University of Technology
Rm 0311, O Block, Gardens Point Campus
2 George St, Brisbane Q 4001
Ph: (07) 3138 8314
Fax: (07) 3138 2310
Email: s.lowchoy_at_qut.edu.au

________________________________________
From: owner-anzstat_at_lists.uq.edu.au [owner-anzstat_at_lists.uq.edu.au] On Behalf Of Patrick Cordue [patrick.cordue_at_isl-solutions.co.nz]
Sent: Friday, 20 February 2009 8:59 AM
To: anzstat_at_lists.uq.edu.au
Subject: GLMs: show me the model!

I asked a question on GLMs a couple of days ago. In essence I was asking
"what is the model - please write it down - you know, like for a linear
model: Y = a + bx + e, where e ~ N(0, s^2) - can't we do that for a GLM?"

I come from a modelling background where the first step is to "write down
the model"; the second step is to look for tools which will provide
estimates of the unknown parameters; (I am assuming we already have a data
set). If my model is a GLM, then I can just use glm() in R. So, I wanted to
know the form of the GLM models for different families and link functions.
In particular, which implied simple additive errors (Y = mu + e) and which
implied simple multiplicative errors (Y = mu * e)?
(where mu = E(Y))

The answer provided by Murray Jorgensen is correct:

"In glms there is no simple characterisation of how the
systematic and random parts of the model combine to give you the data
(other than the definition of the glm, of course)."

Clearly for discrete distributions it makes no sense to look for a
"building block" error e which can be added to, or multiplied by, the
expectation to give the response variable. My question was aimed at
continuous distributions.

Murray Smith (from NIWA) provided some useful comments (see below), which, I
think, get to the heart of my question.

However, I deduced the following results from first principles:

For the Gaussian family, Y = mu + e where e ~ N(0, s^2) (and E(Y) = mu =
m(eta), where eta is the linear combination of the explanatory/stimulus
variables and m^-1 is the link function) is a GLM. I take this to imply
that when one fits a model using glm() with a Gaussian family and any link,
the implied error structure is additive.

For the Gamma family, Y = mu * e where e ~ Gamma(k, 1/k) is a GLM. I take
this to imply that when one fits a model using glm() with a Gamma family and
any link, the implied error structure is multiplicative.
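
For example, one can check this numerically (a quick sketch with arbitrary illustrative values, taking e ~ Gamma(shape = k, scale = 1/k) so that E(e) = 1):

  ## Sketch only: simulate multiplicative Gamma errors Y = mu * e and recover
  ## the coefficients with glm() using a Gamma family and log link.
  set.seed(4)
  n  <- 500
  x  <- runif(n)
  mu <- exp(1 + 2 * x)                       # log link for the mean
  k  <- 5                                    # shape parameter
  e  <- rgamma(n, shape = k, scale = 1 / k)  # mean 1, variance 1/k
  y  <- mu * e                               # multiplicative error structure

  fit <- glm(y ~ x, family = Gamma(link = "log"))
  coef(fit)                                  # close to (1, 2) for this simulation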

For the inverse Gaussian family the implied model does not have a simple
additive or multiplicative error structure (someone might know how to write
down the model in this case - but not me).

Thanks to everyone who provided comments and references.

--------------------------------------

Murray H. Smith wrote:

"In most GLMs the error is neither multiplicative nor additive. Parameterize
the 1-parameter error family by the mean (fixing any dispersion or shape
parameters, which is what pure GLM is with the added constraint that the
error distribution belongs to a 1-parameter exponential family).

We can only write
       y ~ mu + e   or   y ~ mu*e
for e not depending on mu, if mu is a location or scale parameter for the
error family, i.e.
       y ~ f(y; mu) where f(y; mu) = f(y - mu; mu = 0)
or
       y ~ f(y; mu) where f(y; mu) = (1/mu) * f(y/mu; mu = 1)

The variance function V(mu), the variance expressed as a function of the
mean, must be constant for an additive error and proportional to mu^2 for
multiplicative."

--
-----
Patrick Cordue
Director
Innovative Solutions Ltd
www.isl-solutions.co.nz
----
FOR INFORMATION ABOUT "ANZSTAT", INCLUDING UNSUBSCRIBING, PLEASE VISIT http://www.maths.uq.edu.au/anzstat/
Received on Fri Feb 20 2009 - 10:50:20 EST
