In my view, study design (along with theory) is *the* ultimate preregistration. It always tells the story as it is. We just need some counterfactual thinking. Why is the study designed this way? Is this an appropriate design if this is the desired inference? Is this a good test of the theory?
7. In sum, reproducibility is not an indicator of the state of science. Using it as a target will lead us astray. Failed replications should raise more questions about a research field, not fewer. That is, if it's a genuine area of interest (vs. publication chasing by showing a sellable effect).
I suppose it was inevitable.
Problem is, there is a strong positive correlation between prominence and charlatanism in our field.
@stephensenn.bsky.social www.tandfonline.com/doi/abs/10.1...
In recent years a number of authors have promoted approaches to measuring the results of clinical trials that depend on the degree of overlap between the distribution of results from the treatment ...
This redefining of "there is no evidence for" is much like what happened to the term "effect size", which somehow now refers specifically to normalized effect sizes rather than the plain magnitude of the change in an outcome. That shift has contributed to psychology remaining an only pseudo-cumulative science.
3/4 I wrote about this in detail here, using the examples of factor and network models in psychology. I also describe how I am often asked to collaborate "to write a network paper", and when I ask researchers what they want to find out about the world, they have no response. www.tandfonline.com/doi/full/10....
I ramble about this *a lot*, for example here: journals.sagepub.com/doi/full/10...., compass.onlinelibrary.wiley.com/doi/10.1111/..., pubmed.ncbi.nlm.nih.gov/35925053/.
Causal inference is a central goal of research. However, most psychologists refrain from explicitly addressing causal research questions and avoid drawing causa...