Systematic reviews have been introduced as a more rigorous approach to synthesizing evidence than traditional approaches, but their methods are often poorly applied. Using examples, this article aims to identify major pitfalls in the conduct and reporting of systematic reviews.
Traditional approaches to reviewing literature may be susceptible to bias and result in incorrect decisions. This is of particular concern when reviews address policy- and practice-relevant questions. Systematic reviews have been introduced as a more rigorous approach to synthesizing evidence across studies; they rely on a suite of evidence-based methods aimed at maximizing rigour and minimizing susceptibility to bias.
Despite the increasing popularity of systematic reviews in the environmental field, evidence synthesis methods continue to be poorly applied in practice, resulting in the publication of syntheses that are highly susceptible to bias. Recognizing the constraints that researchers can sometimes feel when attempting to plan, conduct and publish rigorous and comprehensive evidence syntheses, this article aims to identify major pitfalls in the conduct and reporting of systematic reviews, making use of recent examples from across the field.
Adopting a “critical friend” role in supporting would-be systematic reviewers, rather than individually policing use of the “systematic review” label, the authors go on to identify methodological solutions to mitigate these pitfalls. They then highlight existing support available for avoiding these issues and call on the entire community, including systematic review specialists, to work towards better evidence syntheses for better evidence and better decisions.