From: Liaw, Andy <andy_liaw_at_merck.com>

Date: Thu 21 Apr 2005 - 00:04:57 EST


> From: michael watson (IAH-C)
>
> Hi
>

> I am performing an analysis of variance with two factors, each with two
> levels. I have differing numbers of observations in each of the four
> combinations, but all four combinations *are* present (2 of the factor
> combinations have 3 observations, 1 has 4 and 1 has 5)
>
> I have used both anova(aov(...)) and anova(lm(...)) in R and it gave the
> same result - as expected. I then plugged this into minitab, performed
> what minitab called a General Linear Model (I have to use this in
> minitab as I have an unbalanced data set) and got a different result.
> After a little mining this is because minitab, by default, uses the type
> III adjusted SS. Sure enough, if I changed minitab to use the type I
> sequential SS, I get exactly the same results as aov() and lm() in R.
>
> So which should I use? Type I sequential SS or Type III adjusted SS?
> Minitab help tells me that I would "usually" want to use type III
> adjusted SS, as type I sequential "sums of squares can differ when your
> design is unbalanced" - which mine is. The R functions I am using are
> clearly using the type I sequential SS.

Here we go again... The `type I vs. type III SS' controversy has long been debated here and elsewhere. I'll give my personal bias, and leave you to dig deeper if you care to.

The `types' of sums of squares are a creation of SAS. Each type corresponds to a different null hypothesis being tested. The short answer to your question would therefore be another question: what are your null and alternative hypotheses?

One of the problems with categorizing tests this way is that it tends to keep people from thinking about that question, which leads to exactly the confusion you describe about which type to use.
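The order dependence of Type I (sequential) SS is easy to see directly in R. The sketch below simulates an unbalanced 2x2 layout (the data, seed, and the names A, B and y are made up for illustration; the cell counts echo the original post) and fits the additive model with the terms entered in both orders:

```r
## Made-up unbalanced 2x2 data; cell counts 3, 4, 3, 5 as in the question.
set.seed(1)
n <- c(3, 4, 3, 5)
dat <- data.frame(A = factor(rep(c("a1", "a1", "a2", "a2"), n)),
                  B = factor(rep(c("b1", "b2", "b1", "b2"), n)))
dat$y <- rnorm(sum(n)) + 2 * (dat$A == "a2")

## Type I SS are sequential: each term is adjusted only for the terms
## fitted before it, so the table depends on the order of the formula.
anova(lm(y ~ A + B, data = dat))  # A unadjusted; B adjusted for A
anova(lm(y ~ B + A, data = dat))  # B unadjusted; A adjusted for B
## With unbalanced cells the two tables report different SS for A (and B);
## with balanced (orthogonal) cells the two orders would agree.
```
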

The school of thought I was brought up in says you need not (and should not) think that way. Rather, frame your question in terms of model comparisons. This approach avoids the notorious problem of comparing models that contain an interaction but lack a main effect involved in that interaction.

More practically: Do you have interaction in your model? If so, the result for the interaction term should be the same in either `type' of test. If that interaction term is significant, you should find other ways to understand the effects, and _not_ test for significance of the main effects in the presence of interaction. If there is no interaction term, you can assess effects by model comparisons such as:

m.full <- lm(y ~ A + B)
m.A <- lm(y ~ A)
m.B <- lm(y ~ B)

anova(m.B, m.full)  ## test for A effect
anova(m.A, m.full)  ## test for B effect
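Putting the whole recommendation together on made-up data (the names and cell counts are assumptions, chosen to mimic the unbalanced layout in the question): first test the interaction by model comparison, and only if it is negligible fall back on the additive-model comparisons above:

```r
## Made-up unbalanced 2x2 data (cell counts 3, 4, 3, 5 as in the question).
set.seed(42)
n <- c(3, 4, 3, 5)
dat <- data.frame(A = factor(rep(c("a1", "a1", "a2", "a2"), n)),
                  B = factor(rep(c("b1", "b2", "b1", "b2"), n)))
dat$y <- rnorm(sum(n)) + 2 * (dat$A == "a2")

## Step 1: is there an interaction?  This comparison is the same under
## either `type' of SS.
m.int  <- lm(y ~ A * B, data = dat)
m.full <- lm(y ~ A + B, data = dat)
anova(m.full, m.int)   # test for the A:B interaction

## Step 2: if the interaction is negligible, test each main effect
## adjusted for the other by comparing additive models.
m.A <- lm(y ~ A, data = dat)
m.B <- lm(y ~ B, data = dat)
anova(m.B, m.full)     # test for A effect, adjusted for B
anova(m.A, m.full)     # test for B effect, adjusted for A
```

Each anova() call reports the F test for the extra term(s) in the larger model, so every main-effect test here is explicitly adjusted for the other factor.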

HTH,

Andy

> Any help would be very much appreciated!
>
> Thanks
> Mick
>
> ______________________________________________
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html

R-help@stat.math.ethz.ch mailing list

https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Received on Thu Apr 21 00:10:17 2005

This archive was generated by hypermail 2.1.8 : Fri 03 Mar 2006 - 03:31:17 EST