From: michael watson (IAH-C) <michael.watson_at_bbsrc.ac.uk>

Date: Thu 21 Apr 2005 - 00:18:56 EST


Thanks for the response. Answers to your questions in turn:

My null hypothesis is that there is no difference between the treatment means. I guess that makes my alternative that there is a difference.

I understand all about interactions, and yes, there's an interaction term in my model. Moreover, it is a pretty easy interaction to understand and interpret. In this example case, yes, the interaction term is significant, and so I know I can and should only interpret this term and not any of the lower-order terms.

However, I will be repeating this analysis for other response variables, some of which inevitably will not have a significant interaction term. What then? I guess one answer would be to say that as it's not significant, I could remove it from the model and perform some model comparisons as you suggest?

Doug agrees with the guy who taught me stats: I should only be looking at the type I sequential sums of squares. I also like that, as it's what comes out of R by default. It's just that Minitab freaked me out.

I guess I could use drop1() to get from the type I to the type III in R...
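As a sketch of that idea (simulated data, not my real response variable): drop1() tests each term by dropping it from the full model, i.e. each term is adjusted for the others, which for an additive two-factor model matches what Minitab labels adjusted SS, whereas anova() gives the sequential tests.

```r
## Sketch: sequential vs. marginal tests on a simulated unbalanced 2x2 design.
set.seed(1)
A <- factor(rep(c("a1", "a2"), times = c(7, 8)))
B <- factor(c(rep("b1", 3), rep("b2", 4), rep("b1", 5), rep("b2", 3)))
y <- rnorm(15) + (A == "a2") * 0.5

m <- lm(y ~ A + B)

anova(m)              ## type I: sequential SS, depends on term order
drop1(m, test = "F")  ## marginal F tests: each term dropped last
```

Note that with an interaction in the model, drop1() respects marginality and will only test the interaction term itself.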

> From: michael watson (IAH-C)
>
> Hi
>
> I am performing an analysis of variance with two factors, each with
> two levels. I have differing numbers of observations in each of the
> four combinations, but all four combinations *are* present (2 of the
> factor combinations have 3 observations, 1 has 4 and 1 has 5).
>
> I have used both anova(aov(...)) and anova(lm(...)) in R and it gave
> the same result - as expected. I then plugged this into Minitab,
> performed what Minitab calls a General Linear Model (I have to use
> this in Minitab as I have an unbalanced data set) and got a different
> result. After a little mining, this is because Minitab, by default,
> uses the type III adjusted SS. Sure enough, if I changed Minitab to
> use the type I sequential SS, I get exactly the same results as
> aov() and lm() in R.
>
> So which should I use? Type I sequential SS or Type III adjusted SS?
> Minitab help tells me that I would "usually" want to use type III
> adjusted SS, as type I sequential "sums of squares can differ when
> your design is unbalanced" - which mine is. The R functions I am
> using are clearly using the type I sequential SS.
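A quick way to see why the two disagree for unbalanced data: sequential (type I) SS for a factor depend on the order in which terms enter the model, while adjusted SS do not. Here is a small sketch with simulated data (not the poster's actual data), using cell counts 3, 4, 5 and 3 as described in the question:

```r
## Simulated unbalanced 2x2 design: cell counts 3, 4, 5, 3 (hypothetical data).
set.seed(7)
A <- factor(rep(c("a1", "a2"), times = c(7, 8)))
B <- factor(c(rep("b1", 3), rep("b2", 4), rep("b1", 5), rep("b2", 3)))
y <- rnorm(15) + (A == "a2") + 2 * (B == "b2")

anova(lm(y ~ A + B))  ## type I: SS for A computed first
anova(lm(y ~ B + A))  ## type I: SS for A computed after B -- generally different
```

In a balanced design the two orderings would give identical SS for A; with these unequal cell counts they differ, which is exactly the discrepancy between R's default output and Minitab's adjusted SS.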

Here we go again... The `type I vs. type III SS' controversy has long been debated here and elsewhere. I'll give my personal bias, and leave you to dig deeper if you care to.

The `types' of sums of squares are a creation of SAS. Each type corresponds to a different hypothesis being tested. The short answer to your question would be: `What are your null and alternative hypotheses?'

One of the problems with categorizing like that is that it tends to keep people from thinking about the question above, which leads to the confusion about which type to use.

The school of thought I was brought up in says you need not (and should not) think that way. Rather, frame your question in terms of model comparisons. This approach avoids the notorious problem of comparing the full model to ones that contain an interaction but lack one of the main effects involved in that interaction.

More practically: Do you have an interaction in your model? If so, the result for the interaction term should be the same in either `type' of test. If that interaction term is significant, you should find other ways to understand the effects, and _not_ test for significance of the main effects in the presence of interaction. If there is no interaction term, you can assess effects by model comparisons such as:

m.full <- lm(y ~ A + B)
m.A <- lm(y ~ A)
m.B <- lm(y ~ B)

anova(m.B, m.full) ## test for A effect
anova(m.A, m.full) ## test for B effect
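Putting the advice above together on simulated data (hypothetical, matching the unbalanced 3/4/5/3 cell counts from the question): first compare the additive model against the model with interaction, and only if the interaction is not needed go on to the main-effect comparisons.

```r
## Hypothetical worked example of the model comparisons above (simulated data).
set.seed(42)
A <- factor(rep(c("a1", "a2"), times = c(7, 8)))
B <- factor(c(rep("b1", 3), rep("b2", 4), rep("b1", 5), rep("b2", 3)))
y <- rnorm(15) + as.numeric(B)

## Step 1: is the interaction needed?
m.int  <- lm(y ~ A * B)
m.full <- lm(y ~ A + B)
anova(m.full, m.int)   ## test for the A:B interaction

## Step 2: if not, compare reduced models against the additive model.
m.A <- lm(y ~ A)
m.B <- lm(y ~ B)
anova(m.B, m.full)     ## test for A effect (A dropped)
anova(m.A, m.full)     ## test for B effect (B dropped)
```

Each anova() call here is an F test of the reduced model against the fuller one, which makes explicit exactly which hypothesis is being tested.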

HTH,

Andy

> Any help would be very much appreciated!
>
> Thanks
> Mick

Notice: This e-mail message, together with any attachments,...{{dropped}}

R-help@stat.math.ethz.ch mailing list

https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Received on Thu Apr 21 00:25:51 2005

This archive was generated by hypermail 2.1.8 : Fri 03 Mar 2006 - 03:31:17 EST