David Epstein
@davidnotdave.bsky.social
Literature major, then neuroscience Ph.D., then addiction researcher. Enough-knowledge-to-endanger-myself in social sciences and statistics. He/him. Views my own.
David Epstein @davidnotdave.bsky.social

I kinda startled myself with this little experiment in power calculations. You can have zero ability to detect the effect you're looking for at its true size, yet more than zero "power," and all of that power reflects your likelihood of overestimating the effect.

If your sample size is 50, then the smallest correlation that will be statistically significant (at .05, two-tailed) is .28.
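(If you want to check that threshold, here's a minimal sketch in Python, assuming SciPy is installed; `critical_r` is just a name I picked.)

```python
# Minimal sketch: smallest |r| that reaches p < .05 (two-tailed) for a given n,
# from the t-test for a Pearson correlation with df = n - 2.
from scipy import stats

def critical_r(n, alpha=0.05):
    """Smallest absolute correlation that is significant at `alpha`, two-tailed."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # critical t value
    return t_crit / (t_crit**2 + df) ** 0.5   # invert t = r * sqrt(df) / sqrt(1 - r^2)

print(round(critical_r(50), 3))  # 0.279, i.e., about .28
```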

An observed r of .27 in that sample cannot be statistically significant (at .05, two-tailed).

And yet…

A power analysis would tell you that you have 48% power to detect that correlation with n = 50.
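(Here's a rough way to reproduce that number with the Fisher z approximation; a sketch, with `power_r` as a made-up helper name, and exact routines may differ slightly in the second decimal.)

```python
# Sketch: approximate power for testing a true r = .27 with n = 50,
# alpha = .05 two-tailed, using the Fisher z approximation.
from math import atanh, sqrt
from scipy.stats import norm

def power_r(r, n, alpha=0.05):
    """Approximate power of the two-tailed test of H0: rho = 0 against true rho = r."""
    z_r = atanh(r)                    # Fisher z of the true correlation
    se = 1 / sqrt(n - 3)              # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha / 2)  # two-tailed critical value
    # probability that the sample's Fisher z lands beyond either critical value
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

print(round(power_r(0.27, 50), 2))  # about 0.48
```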

What the power analysis actually means is that you have a 48% probability of getting a p value < .05 with n = 50. When that happens, it can only mean that the correlation in your sample is an overestimate: at least .28, bigger than the true value of .27.

It's impossible, with n=50, to detect the correlation at its true size (r=.27) and have it be statistically significant.
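(A quick simulation makes this concrete; a sketch assuming NumPy and SciPy, with an arbitrary seed and rep count.)

```python
# Sketch of a simulation: with true rho = .27 and n = 50, every sample correlation
# that reaches p < .05 (two-tailed) exceeds the critical value of about .28 in
# absolute value, so any significant result overestimates the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, n, reps = 0.27, 50, 20_000
cov = [[1, rho], [rho, 1]]

sig_rs = []
for _ in range(reps):
    x, y = rng.multivariate_normal([0, 0], cov, size=n).T
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        sig_rs.append(r)

sig_rs = np.array(sig_rs)
print(f"power (simulated): {len(sig_rs) / reps:.2f}")           # about 0.48
print(f"smallest significant |r|: {np.abs(sig_rs).min():.3f}")  # at least ~0.28
print(f"mean significant r: {sig_rs.mean():.3f}")               # well above .27
```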

Presumably, this problem is less extreme at conventional power levels (80% or more), but the underlying fact seems to remain.
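(To put rough numbers on that, under the same Fisher z approximation: 80% power for r = .27 needs roughly n = 106, and at that sample size the significance threshold falls to about .19, well below the true effect, so significant results are no longer forced to overestimate it, though they still overestimate on average. A sketch, with illustrative helper names:)

```python
# Sketch: how the picture changes at a conventional 80% power level
# (Fisher z approximation; helper names are made up for this example).
from math import atanh, ceil
from scipy import stats

def n_for_power(r, power=0.80, alpha=0.05):
    """Approximate n needed to detect a true correlation r at the given power, two-tailed."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

def critical_r(n, alpha=0.05):
    """Smallest absolute correlation significant at `alpha`, two-tailed (df = n - 2)."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit / (t_crit**2 + df) ** 0.5

n80 = n_for_power(0.27)
print(n80)                        # about 106
print(round(critical_r(n80), 3))  # about .19, well below the true r of .27
```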

SB @stephenbenning.bsky.social

A statistical significance test of a parameter whose true value sits right on the line of significance will have power = .50. Because sample estimates of a parameter have sampling distributions centered roughly on its true value, you're right that the sample statistic is just as likely to fall below the parameter's value as above it, which is why only about half of samples come out significant.
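(A quick numeric check of that point, sketched with the same Fisher z approximation as above, plugging in a true correlation right at the n = 50 threshold:)

```python
# Sketch: approximate power when the true correlation equals the critical value
# (about .279 at n = 50, alpha = .05 two-tailed).
from math import atanh, sqrt
from scipy.stats import norm

r_true, n, alpha = 0.279, 50, 0.05
z_r, se = atanh(r_true), 1 / sqrt(n - 3)
z_crit = norm.ppf(1 - alpha / 2)
power = norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)
print(round(power, 2))  # about 0.50
```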

AW @aidangcw.bsky.social

I guess not if you take confidence interval lower bounds seriously?
