Ref: https://onlinecourses.science.psu.edu/stat464/print/book/export/html/8
Having examined the one- and two-sample cases above, we now consider the case of k samples. The first method is:
- Analysis of Variance (ANOVA)
Its assumptions are:
- The groups are independent
- The data in each group are normally distributed
- The groups have equal variances
Then our hypotheses are:
H0: μ1 = μ2 = μ3
H1: at least one mean differs
R performs ANOVA with the aov function; the data come from the earlier code (the Simpson index computed in the alpha-diversity section).
```r
simpsonbox <- read.csv("simpsonindex1.csv")
group <- factor(c(rep(1, 21), rep(22, 21), rep(43, 20)), labels = c("A", "B", "C"))
simpsondata <- data.frame(simpsonindex = simpsonbox$x, group = group)
# non-parametric check for equal variances
fligner.test(simpsonindex ~ group, data = simpsondata)
# for normally distributed data, check for equal variances
bartlett.test(simpsonindex ~ group, data = simpsondata)
# ANOVA
simpsonaov <- aov(simpsonindex ~ group, data = simpsondata)
summary(simpsonaov)
```
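Since simpsonindex1.csv is not included here, the same variance-check-then-ANOVA pipeline can be sketched on simulated data (group means, spreads, and sizes below are made up for illustration):

```r
# Simulated stand-in for the Simpson-index data (values are assumptions, not the course data)
set.seed(1)
simpsondata <- data.frame(
  simpsonindex = c(rnorm(21, mean = 0.80, sd = 0.05),   # group A
                   rnorm(21, mean = 0.82, sd = 0.05),   # group B
                   rnorm(20, mean = 0.90, sd = 0.05)),  # group C
  group = factor(c(rep("A", 21), rep("B", 21), rep("C", 20)))
)

# Homogeneity-of-variance check (Bartlett assumes normality)
bartlett.test(simpsonindex ~ group, data = simpsondata)

# One-way ANOVA: a small Pr(>F) in the summary rejects H0
simpsonaov <- aov(simpsonindex ~ group, data = simpsondata)
summary(simpsonaov)
```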
- The non-parametric alternative is the Kruskal-Wallis test, kruskal.test
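A minimal sketch of kruskal.test on simulated skewed data (the data-generating choices here are illustrative assumptions):

```r
# Kruskal-Wallis is rank-based, so no normality assumption is needed;
# exponential data are used here precisely because they are non-normal
set.seed(2)
dat <- data.frame(
  value = c(rexp(15, rate = 1), rexp(15, rate = 1), rexp(15, rate = 0.25)),
  group = factor(rep(c("A", "B", "C"), each = 15))
)
kw <- kruskal.test(value ~ group, data = dat)
kw$p.value  # small values suggest at least one group distribution differs
```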
After the ANOVA analysis above, if we reject the null hypothesis and conclude that the group means differ, which pairwise differences between the groups are significant?
Besides TukeyHSD (which requires normally distributed data), we can also use the Bonferroni adjustment, which simply divides the Type I error rate (α) by the number of tests (in this case, three).
```r
pairwise.t.test(simpsondata$simpsonindex, simpsondata$group, p.adjust.method = "bonferroni")
```
We can compare its results with those of TukeyHSD. In fact, the results are consistent: both show that the difference between group C and group A is significant. Bonferroni is generally considered the more conservative adjustment. The function we use is documented at:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/pairwise.t.test.html
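The comparison between the two approaches can be sketched on simulated data (group means and sizes below are illustrative assumptions, not the course data):

```r
# TukeyHSD vs pairwise t-tests with Bonferroni correction
set.seed(3)
d <- data.frame(
  y = c(rnorm(20, mean = 10), rnorm(20, mean = 10.2), rnorm(20, mean = 12)),
  g = factor(rep(c("A", "B", "C"), each = 20))
)
fit <- aov(y ~ g, data = d)
tk <- TukeyHSD(fit)   # adjusted p-values and confidence intervals for all 3 pairs
tk
pairwise.t.test(d$y, d$g, p.adjust.method = "bonferroni")
```

Both report one adjusted p-value per pair of groups; with three groups there are three comparisons.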
The following article points out that the Holm adjustment is better than Bonferroni and also uses the pairwise.t.test function:
Ref: http://rtutorialseries.blogspot.jp/2011/03/r-tutorial-series-anova-pairwise.html
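The difference between the two corrections can be seen directly with p.adjust; the raw p-values below are hypothetical, chosen only to illustrate the behavior:

```r
# Hypothetical raw p-values from three pairwise tests
p <- c(0.010, 0.020, 0.040)
p.adjust(p, method = "bonferroni")  # multiplies every p-value by 3 (capped at 1)
p.adjust(p, method = "holm")        # step-down procedure: never larger than Bonferroni
```

Holm controls the same family-wise error rate but is uniformly at least as powerful, which is why it is often preferred.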
- Fisher's Least Significant Difference (LSD) method essentially applies no correction to the Type I error rate for multiple comparisons, and it is generally not recommended relative to the other options.
```r
library(agricolae)
LSD.test()   # see ?LSD.test for the required arguments
```
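agricolae's LSD.test wraps the calculation; the LSD threshold itself can also be computed by hand from the ANOVA error mean square, as in this sketch on simulated data (equal group sizes assumed; names and values are illustrative):

```r
# Fisher's LSD by hand: LSD = t(1 - alpha/2, df_error) * sqrt(2 * MSE / n)
set.seed(4)
d <- data.frame(
  y = c(rnorm(20, mean = 10), rnorm(20, mean = 10.5), rnorm(20, mean = 12)),
  g = factor(rep(c("A", "B", "C"), each = 20))
)
fit <- aov(y ~ g, data = d)
mse <- summary(fit)[[1]][["Mean Sq"]][2]  # error (residual) mean square
dfe <- fit$df.residual                    # error degrees of freedom: 60 - 3 = 57
n   <- 20                                 # observations per group
lsd <- qt(0.975, dfe) * sqrt(2 * mse / n)
lsd  # two group means further apart than this are declared different at alpha = 0.05
```

Because each pairwise comparison uses the unadjusted alpha, the family-wise error rate grows with the number of comparisons, which is the criticism noted above.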
Ref: Applied Nonparametric Statistics, Lecture 6