Multiple comparisons correction

Theory

When performing multiple comparisons, the probability of making at least one Type I error (the familywise error rate) grows with the number of comparisons. For m independent tests at significance level α, it equals 1 − (1 − α)^m, which is at most m·α (the Bonferroni bound).

Example

If your α is 0.05 and you perform five independent tests, the probability that at least one of them yields a Type I error is 1 − (1 − 0.05)^5 ≈ 0.23, roughly the Bonferroni bound of 5 × 0.05 = 0.25. An α of 0.05 implies that, on average, 5 out of 100 tests of true null hypotheses will be false positives.
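As a quick check, the familywise error rate for the five-test example can be computed directly (a minimal sketch; the exact formula assumes the tests are independent):

```python
# Familywise error rate (FWER) for m independent tests at level alpha.
alpha = 0.05
m = 5

# Exact FWER under independence: 1 - P(no false positive in any of m tests)
fwer_exact = 1 - (1 - alpha) ** m

# Bonferroni upper bound: m * alpha (holds even without independence)
fwer_bound = m * alpha

print(f"exact FWER under independence = {fwer_exact:.4f}")  # ~0.2262
print(f"Bonferroni upper bound        = {fwer_bound:.2f}")  # 0.25
```

Note that the exact rate (about 0.23) is always slightly below the Bonferroni bound of 0.25, and the gap widens as the number of tests grows.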

A significant F-test (or equivalent non-parametric omnibus test) tells us only that at least one population mean differs significantly from the others, not which one. So if you have four groups (six pairwise comparisons), follow-up pairwise tests are needed to locate the differences, and those post-hoc tests must themselves be corrected for multiple comparisons.
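A minimal sketch of Bonferroni correction applied to the post-hoc pairwise tests for four groups (the p-values below are hypothetical, for illustration only):

```python
from itertools import combinations

groups = ["A", "B", "C", "D"]
pairs = list(combinations(groups, 2))  # all pairwise comparisons
m = len(pairs)                         # 4 * 3 / 2 = 6 for four groups

alpha = 0.05
# Hypothetical p-values from the six pairwise tests (illustrative only)
p_values = [0.001, 0.020, 0.004, 0.300, 0.045, 0.700]

# Bonferroni: compare each p-value to alpha / m
# (equivalently, multiply each p-value by m and compare to alpha)
threshold = alpha / m
for (a, b), p in zip(pairs, p_values):
    verdict = "significant" if p < threshold else "not significant"
    print(f"{a} vs {b}: p = {p:.3f} -> {verdict}")
```

With six comparisons the per-test threshold drops to 0.05 / 6 ≈ 0.0083, so a comparison with p = 0.020 that would pass an uncorrected test is no longer declared significant.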

Tests


References