And the sad story of how the commentary could not be #openaccess: www.francescosantini.com/wp/2024/03/1...
Of course, also read the great paper I am commenting on: Cocozza, S., Palma, G. Of editorial processes, AI models, and medical literature: the Magnetic Resonance Audiometry experiment. rdcu.be/dBbpE
😍 Excited about the #LoveData24 event: www.eventbrite.co.uk/e/open-data-... See zhbluzern.ch/love-data-week for more events.
But the good news is: we already have the tools to change this. We should acknowledge that method-development research is hypothesis-driven research. #Preregistration and #RegisteredReports should become the norm. Then we will become more confident, rigorous, and effective.
This is everything we stand against. We’ve been so immersed in the culture of “method research is different” that we thought the reproducibility crisis did not apply to us. As fish do not see the water around them, we don’t see publication bias because it’s all around us.
Time and money were wasted, human experiments were performed, and no trace remains. Future researchers might repeat the same mistakes. In the worst case, the researcher is tempted to get "creative" with the data.
In my field, this testing often involves human experiments. If one does the work and the desired quality is not achieved, what happens? In the best case, the work is shoved under the rug and never published. Journal guidelines explicitly make it impossible to publish.
“Achieving its intended purpose” generally means reaching some minimum score on a quality metric, for example, precision or accuracy. However, one cannot know whether the proposed method will achieve the desired quality before doing proper testing.
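To make the threshold idea concrete, here is a minimal sketch of what a preregistered acceptance criterion could look like. The 0.85 threshold, the toy data, and the function name are my own hypothetical illustrations, not taken from any journal’s guidelines or from the paper:

```python
# Minimal sketch of a preregistered acceptance criterion (illustrative only).
# The 0.85 threshold and the names below are hypothetical assumptions.
from sklearn.metrics import accuracy_score

PREREGISTERED_MIN_ACCURACY = 0.85  # fixed *before* any data is collected


def meets_preregistered_criterion(y_true, y_pred):
    """Score the method and check it against the prespecified threshold."""
    acc = accuracy_score(y_true, y_pred)
    return acc, acc >= PREREGISTERED_MIN_ACCURACY


# Whatever the outcome, the result is reportable: the criterion was set in
# advance, so a "failed" method is a valid finding, not a file-drawer casualty.
acc, passed = meets_preregistered_criterion([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
print(f"accuracy = {acc:.2f}; preregistered criterion met: {passed}")
```

The point of fixing the criterion in advance is that a score below the threshold is still a complete, publishable outcome of the plan, rather than a reason to bury the work.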
Even PLoS ONE, a journal that takes open science very seriously, says that “submissions…must demonstrate that the new tool achieves its intended purpose [and that it] is an improvement over existing options.” The fact that it sounds so reasonable is actually what makes it scary.