And an even narrower version of that hypothesis testing is the NHST mindset, which then cannot imagine any of the other ways scientists have been testing their hypotheses for centuries.
I think there's a simple explanation for that. It's not that people don't know about the issues elsewhere; it's that Open Science (TM) has been disproportionately concerned with NHST, p-values, and error control, and has largely focused on making those work.
#statstab #nhst #anova #ttest #errorcontrol #typeI #pvalues
www.degruyter.com/document/doi...
Multiple contrast tests can be used to test arbitrary linear hypotheses by providing local and global test decisions as well as simultaneous confidence intervals. The ANOVA- F -test on the contrary ca...
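A minimal sketch of the distinction the quoted abstract draws: the ANOVA F-test only delivers a global "some means differ" decision, while a linear contrast gives a local, directed test. The group data and the contrast vector below are made up for illustration, and this single-contrast t-test omits the simultaneous confidence intervals that proper multiple contrast test procedures provide.

```python
# Global ANOVA F-test vs. one linear-contrast t-test (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.0, 0.8)]

# Global test: "some means differ", with no direction or location.
F, p_global = stats.f_oneway(*groups)

# Local test: third group vs. the average of the first two, c = (-0.5, -0.5, 1).
c = np.array([-0.5, -0.5, 1.0])
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Pooled error variance from the one-way ANOVA residuals.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = ns.sum() - len(groups)
mse = ss_within / df_within

est = c @ means                          # estimated contrast
se = np.sqrt(mse * (c**2 / ns).sum())    # its standard error
t_stat = est / se
p_contrast = 2 * stats.t.sf(abs(t_stat), df_within)
```

The contrast test answers *which* difference is present and in what direction; the F-test, as the abstract notes, does not.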
Similarly, inflated alpha from p-hacking, publication bias, and QRPs is by no means the most salient concern across science. That assumption presupposes a particular inferential goal and approach (NHST, minimizing false positives) and ignores the challenges other scientists face.
The grant application: "NHST is whack yo" The paper: "check out these p-values!!!"
Technically, under Neyman-Pearson NHST that is correct: the value itself is not interpretable beyond the over/under decision criterion.
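A toy sketch of that Neyman-Pearson reading: the p-value feeds only a binary reject/retain decision against a pre-set alpha, and its exact magnitude carries no further evidential weight under this framework. The data here are simulated for illustration.

```python
# Neyman-Pearson reading of a test: the p-value is only compared against
# a pre-registered alpha; its exact size is not interpreted further.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05  # fixed long-run Type I error rate, chosen before seeing data

a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.5, 1.0, size=50)

t, p = stats.ttest_ind(a, b)
decision = "reject H0" if p < alpha else "retain H0"
# Under this framework, p = 0.001 and p = 0.04 license the same decision.
```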
Best moment in the video: "I'm not gonna define p-values today, forgive me." Maybe James should. Not that there aren't any good definitions (e.g. check www.ncbi.nlm.nih.gov/pmc/articles...). #NHST #pvalue #Statistics
Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of t...