Which method should I use for my meta-analysis?

While recent research suggests that the conventional small-study effect methods may have substantial limitations, and that p-curve may be able to estimate the true effect with less bias [@simonsohn2014p;@simonsohn2015better;@simonsohn2014pb], please note that the two approaches rest on different theoretical assumptions about the origin of publication bias. As we cannot ultimately decide which assumption is the "true" one in a specific research field, and because in practice the true effect is unknown when conducting a meta-analysis, we argue that you may use both methods and compare the results as sensitivity analyses [@harrer2019internet].
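
As an illustration of what such a sensitivity analysis could look like, here is a minimal Python sketch (not code from this guide) that runs an Egger-style regression test for small-study effects alongside a simple p-curve-style tally of significant p-values. The effect sizes and standard errors are made up for demonstration, and the `scipy`-based calculations are only rough stand-ins for the full methods.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study effect sizes (e.g., Hedges' g) and standard errors.
g  = np.array([0.31, 0.45, 0.12, 0.62, 0.28, 0.51, 0.09, 0.40])
se = np.array([0.12, 0.20, 0.08, 0.25, 0.10, 0.22, 0.07, 0.15])

# --- Small-study effects: Egger-style regression test -----------------------
# Regress the standardized effect (g / se) on precision (1 / se); an intercept
# that clearly differs from zero points to funnel-plot asymmetry.
res = stats.linregress(1 / se, g / se)
t_intercept = res.intercept / res.intercept_stderr
p_intercept = 2 * stats.t.sf(abs(t_intercept), df=len(g) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_intercept:.3f}")

# --- p-curve-style check -----------------------------------------------------
# Keep only the significant studies (two-sided p < .05) and test whether their
# p-values pile up below .025, as expected when a true non-null effect exists.
p_values = 2 * stats.norm.sf(np.abs(g / se))
significant = p_values[p_values < 0.05]
k_below = int((significant < 0.025).sum())
binom = stats.binomtest(k_below, n=significant.size, p=0.5,
                        alternative="greater")
print(f"{k_below} of {significant.size} significant p-values "
      f"below .025 (binomial p = {binom.pvalue:.3f})")
```

In an actual analysis you would rather rely on dedicated R packages, for example `metafor` for Egger's regression test and the p-curve implementation in `dmetar`; the sketch only shows the general idea of running both checks and comparing what they suggest.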

P-curve was developed with full-blown experimental psychological research in mind, in which researchers often have many "researcher degrees of freedom" [@simmons2011false] in deleting outliers and performing statistical tests on their data.

We argue that this looks slightly different for clinical psychology and the medical field, where researchers conduct randomized controlled trials with a clear primary outcome: the difference between the control and the intervention group after treatment. While it is also true for medicine and clinical psychology that statistical significance plays an important role, the effect size of an intervention is often of greater interest, as treatments in this field are commonly compared in terms of their treatment effects. Furthermore, best practice for randomized controlled trials is to perform intention-to-treat analyses, in which all data collected in a trial have to be considered, giving researchers less room to "play around" with their data and engage in p-hacking. While we certainly do not want to insinuate that outcome research in clinical psychology is free from p-hacking and bad data-analysis practices, this should be seen as a caveat: the assumptions of the small-study effect methods may be more adequate for clinical psychology than for other fields within psychology, especially when the risk of bias of each study is also taken into account.

Facing this uncertainty, we think that conducting both analyses and reporting both of them in the research paper may be the most adequate approach until meta-scientific research gives us more certainty about which assumption actually best reflects the field of clinical psychology.