Tagging some people who might be interested in this: @malte.the100.ci @ianhussey.bsky.social @ruben.the100.ci @tomhardwicke.bsky.social @bethclarke.bsky.social @aurelienallard.bsky.social @eikofried.bsky.social @debruine.bsky.social
4/ With @annaveer.bsky.social, @ze-freeman.bsky.social, @informusiccs.bsky.social, @anamartinovici.bsky.social, Don van Ravenzwaaij, and Sajedeh Rasti.
3/ We also wrote a transparency statement for this output and added contributor roles following the #CRediT taxonomy (credit.niso.org). If you have thoughts about this statement or ideas about other things we could have mentioned, let us know!
2/ Brilliant minds shared their perspectives on coordinating scientific quality control in practice: 1) for promotions and tenure, 2) at journals pre- and post-publication, and 3) in error-detection projects. How do you think our community can organize to repair and prevent scientific errors?
That's horrible, Fardid. So sorry to hear about your situation, and I sincerely hope that Jana's family will be able to evaluate.
All good points. The replication value formula we propose makes some pretty extreme assumptions. Does that render the formula useless? Is there a way to improve it that does not compromise the feasibility of carrying out computations? I am not sure! But I welcome commentaries on all these issues :)
All thoughts, criticisms, or suggestions for improvements on our proposed strategy are most welcome! There is currently not much discussion of which studies to prioritize for replication (given resource constraints). Hopefully, this special issue can help change that!
The target paper is written by me, @annaveer.bsky.social, and @lakens.bsky.social: osf.io/preprints/me... In it, we propose to use a combination of citation count and sample size to identify which claims in a field are the most important to replicate.
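For anyone who wants to play with the idea, here is a minimal sketch of how such a prioritization could be computed. The exact operationalization is in the preprint; for illustration I simply assume a score proportional to citation count divided by sample size, and the claim labels and numbers below are made up.

```python
# Illustrative sketch only: ranks claims by a toy "replication value".
# The assumed score (citations / sample size) is a placeholder, not
# necessarily the exact formula proposed in the target paper.

def replication_value(citation_count: int, sample_size: int) -> float:
    """Higher when a claim is widely cited but supported by a small sample."""
    return citation_count / max(sample_size, 1)  # guard against n = 0

claims = [
    # (label, citation_count, sample_size) -- made-up example numbers
    ("Claim A", 500, 40),
    ("Claim B", 120, 1200),
    ("Claim C", 900, 300),
]

# Sort so the most replication-worthy claims (by this toy score) come first.
ranked = sorted(claims, key=lambda c: replication_value(c[1], c[2]), reverse=True)
for label, cites, n in ranked:
    print(f"{label}: RV = {replication_value(cites, n):.2f} (citations={cites}, n={n})")
```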
Touché! But yes, guilty as charged. Amazing initiative btw. Could you DM me your contact list so I can refer to them as well? 😇
Aha, so that's where they're coming from ;)