What causes loss of power in hypothesis testing?

Several days ago I asked the question above on StackExchange. Here are the answers I received:

There is an enormous literature on this subject; I’ll just give you a quick thumbnail sketch. Let’s say you’re testing groups’ mean differences, as in t-tests. Power will be reduced if…

  1. …variables are measured unreliably. This will in essence “fuzz over” the differences.
  2. …variability within groups is high. This will render between-group differences less noteworthy.
  3. …your criterion for statistical significance is strict, e.g., .001 as opposed to the more common .05.
  4. …you are using a one-tailed test and the true difference runs opposite to your hypothesized direction. A one-tailed test puts all of its power on the hypothesized side, so it has essentially none against effects on the other side; it also rules out the opportunistically significant findings in that direction that a two-tailed test would pick up.
  5. …you are dealing with (or expecting) very slim mean differences in the first place. Small effect sizes decrease power.

If you google “power calculator” you’ll find some nifty sites that demonstrate the way these factors interact, depending on values you input. And if you download the free G*Power 3 program you’ll have a nice tool for calculating power and exploring the effects of different conditions, for a wide variety of statistical procedures.
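The trade-offs in the list above can also be made concrete with a quick back-of-the-envelope calculation. The sketch below is plain Python using a normal approximation in place of the exact noncentral t distribution, so the numbers are approximate for small samples; the function names are mine, not from G*Power or any library:

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(p):
    """Inverse normal CDF by bisection (stdlib only, no SciPy)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n_per_group, alpha=0.05, tails=2):
    """Approximate power of a two-sample test for a mean difference.

    d is the standardized effect size (Cohen's d); the t distribution
    is approximated by the normal, reasonable for moderate n.
    """
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    if tails == 2:
        zcrit = z_quantile(1.0 - alpha / 2.0)
        # rejection can happen on either side of the null
        return (1.0 - phi(zcrit - ncp)) + phi(-zcrit - ncp)
    zcrit = z_quantile(1.0 - alpha)  # one tail, hypothesized direction
    return 1.0 - phi(zcrit - ncp)

# Medium effect (d = .5), 64 per group, alpha = .05: roughly 80% power.
print(round(power_two_sample(0.5, 64), 2))
# A stricter alpha (.001) or a smaller effect (d = .2) cuts power sharply.
print(round(power_two_sample(0.5, 64, alpha=0.001), 2))
print(round(power_two_sample(0.2, 64), 2))
```

Playing with the arguments reproduces the list: shrink `d`, tighten `alpha`, or (via `d = mean difference / within-group SD`) inflate the noise, and the returned power drops.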

Another answer is:

Think of power as your ability to recognize which of two competing data-generating processes is the true one. That will obviously be easier when:

  1. The data have much signal with little noise (good SNR).
  2. The competing processes are very different one from the other (null and alternative are separated).
  3. You have ample amounts of data.
  4. You are not too concerned about false positives (a large Type I error rate, i.e., a lenient significance threshold).

If the above do not hold, it will be harder to identify the “true” process; that is, you will lose power.
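These four conditions can be checked empirically: simulate the two competing data-generating processes and count how often the test picks the right one. A minimal Monte Carlo sketch in plain Python, assuming a two-sided z-test with sigma known and alpha fixed at .05 (all names here are illustrative, not from any package):

```python
import math
import random

def simulated_power(mu_diff, sigma, n, reps=4000, seed=1):
    """Monte Carlo power estimate for a two-sided z-test at alpha = .05.

    Draws two groups of size n from N(0, sigma) and N(mu_diff, sigma)
    and counts how often the known-sigma z-test rejects the null.
    """
    rng = random.Random(seed)
    zcrit = 1.959964                   # two-sided 5% normal critical value
    se = sigma * math.sqrt(2.0 / n)    # std. error of the mean difference
    rejections = 0
    for _ in range(reps):
        m1 = sum(rng.gauss(0.0, sigma) for _ in range(n)) / n
        m2 = sum(rng.gauss(mu_diff, sigma) for _ in range(n)) / n
        if abs(m2 - m1) / se > zcrit:
            rejections += 1
    return rejections / reps

# 1. More noise (worse SNR) -> less power:
print(simulated_power(1.0, 1.0, 30), simulated_power(1.0, 3.0, 30))
# 2. Better-separated processes -> more power:
print(simulated_power(0.2, 1.0, 30), simulated_power(1.0, 1.0, 30))
# 3. More data -> more power:
print(simulated_power(0.5, 1.0, 10), simulated_power(0.5, 1.0, 100))
```

Raising `zcrit` (a stricter Type I error rate) lowers the rejection count the same way, which is condition 4 in reverse.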