In multiple testing problems the notions of power and protected inference must be redefined to incorporate multiplicity. The first distinction between single-hypothesis testing and multiple testing is that some null hypotheses are true and some are false. We operate under a Bayesian model in which we posit a prior probability, r.1, that the alternative hypothesis is true. Suppose that in your experiment the alternative hypothesis is true for a random number, \(M\), of the \(m\) tests. Suppose we apply a given procedure and reject the null hypothesis for a random number of tests, \(R\), and that of these, the alternative hypothesis is true for \(T\) and the null hypothesis is true for \(V=R-T\). The proportion of rejected null hypotheses that are true, \(\mathrm{FDP}=V/R\) (taken to be 0 when \(R=0\)), is called the false discovery proportion. Its expected value, \(\mathrm{FDR}=\mathrm{E}[\mathrm{FDP}]\), is called the false discovery rate. The proportion of tests for which the alternative hypothesis is true and the null hypothesis is rejected, \(\mathrm{TPP}=T/M\), is called the true positive proportion.
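These quantities are easy to make concrete in a small simulation. The sketch below is in Python rather than R and is not part of the pwrFDR package; the two-groups normal model, the Benjamini–Hochberg procedure, and all parameter values (effect size 3, \(\texttt{r.1}=0.2\), FDR level 0.15) are illustrative assumptions. It draws which tests have a true alternative, applies BH, and computes \(\mathrm{FDP}=V/R\) and \(\mathrm{TPP}=T/M\) for each replicate.

```python
import math
import numpy as np

def bh_reject(p, alpha):
    """Benjamini-Hochberg at level alpha: boolean mask of rejected hypotheses."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def one_replicate(m, r1, mu, alpha, rng):
    """One replicate of the two-groups model; returns (FDP, TPP)."""
    is_alt = rng.random(m) < r1            # alternative true with prior prob r.1
    z = rng.standard_normal(m) + mu * is_alt
    # one-sided p-values for the normal test statistics
    p = np.array([0.5 * math.erfc(zi / math.sqrt(2.0)) for zi in z])
    rej = bh_reject(p, alpha)
    R, M = rej.sum(), is_alt.sum()
    T = (rej & is_alt).sum()
    V = R - T
    fdp = V / R if R > 0 else 0.0          # FDP = V/R, 0 when R = 0
    tpp = T / M if M > 0 else 0.0          # TPP = T/M
    return fdp, tpp

rng = np.random.default_rng(1)
fdps, tpps = zip(*(one_replicate(1000, 0.2, 3.0, 0.15, rng) for _ in range(200)))
print(round(float(np.mean(fdps)), 3), round(float(np.mean(tpps)), 3))
```

Averaging the FDP over replicates estimates the FDR, which BH keeps below the nominal level; averaging the TPP estimates the average power.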

Plots show how dispersion in the FDP and TPP increases from negligible for large \(m\) to very worrisome for moderate \(m\), and how the presence of correlated tests worsens this problem.
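The growth of dispersion as \(m\) shrinks can be checked directly by simulation. The sketch below is in Python rather than the package's own R plotting code; the two-groups normal model, the Benjamini–Hochberg procedure, and the parameter values are illustrative assumptions. It estimates the standard deviation of the FDP at \(m=100\) and \(m=10000\) tests.

```python
import math
import numpy as np

def bh_fdp(m, r1, mu, alpha, rng):
    """One replicate: Benjamini-Hochberg at level alpha in the two-groups
    model with normal statistics; returns the realized FDP = V/R."""
    is_alt = rng.random(m) < r1            # alternative true with prior prob r.1
    z = rng.standard_normal(m) + mu * is_alt
    p = np.array([0.5 * math.erfc(zi / math.sqrt(2.0)) for zi in z])
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = order[:k]
    V = int((~is_alt[rejected]).sum())      # rejections with a true null
    return V / k if k > 0 else 0.0

rng = np.random.default_rng(7)
sd = {m: float(np.std([bh_fdp(m, 0.2, 3.0, 0.15, rng) for _ in range(300)]))
      for m in (100, 10000)}
print(sd)
```

With these settings the FDP standard deviation at \(m=100\) is several times that at \(m=10000\), so a nominal FDR guarantee says much less about any single moderate-sized experiment.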

Much more is possible with the R package pwrFDR.