Reliable synthesis of the various rapidly expanding bodies of evidence is vital for evidence-informed decision-making in environmental policy, practice and research [1,2,3,4]. Methods for systematic evidence synthesis (including systematic reviews and maps) are becoming an industry standard for cataloguing, collating and synthesising documented evidence [5]. Systematic reviews and maps are conducted through transparent and repeatable processes, maximising objectivity and attempting to minimise bias throughout the review [6]. Systematic review methods were translated from the field of healthcare to conservation and environmental management in 2006 as part of the emerging ‘evidence-based conservation’ movement [7,8,9,10,11,12]. Systematic reviews are frequently used to assess the effectiveness of management interventions or the effect of an anthropogenic action or natural impact [7, 9]. More recently, these methods have been used to answer broader questions that deal with complex systems, for example investigating how, and under which conditions, an intervention or action may have the greatest effect.
In order to increase the value of reviews for policy and practice and to ensure that they comply with established standards and procedures, formal review coordinating bodies have been established across various disciplines, including Cochrane in healthcare, the Campbell Collaboration in social welfare, and the Collaboration for Environmental Evidence (CEE) in conservation and environmental management. These collaborations provide guidance and training, and endorse reviews through their registration and publication [6, 13, 14]. Whereas in other fields protocols may be published without peer review (e.g. on protocol repository platforms), CEE protocols must be registered and peer-reviewed through the formal CEE editorial process. Endorsed reviews are vetted by methodology experts and can therefore be trusted as more rigorous and reliable. Nevertheless, substandard reviews remain numerous (see [15, 16]), with flaws in planning and design (e.g. a protocol that is missing or lacks crucial details), conduct (e.g. a non-comprehensive search) and/or reporting (e.g. poor clarity or comprehensiveness in the write-up) [17, 18]. Without transparent reporting, even well-designed reviews will fail to demonstrate their methodological strengths, undermining their utility in decision-making contexts [17].
Systematic review methodology was first established in medicine in the 1990s to support well-informed decision-making in the health sector, initially focusing on synthesising quantitative evidence from randomised controlled trials [13]. Since then, systematic review methodology has spread across a range of fields, including software engineering, education, social welfare and international development, public and environmental health, and crime and justice [19,20,21,22], broadening not only the scope of topics but also the methodologies applied. It is now standard practice, for example, to incorporate observational studies and qualitative research into systematic reviews.
With the rise of evidence-based medicine and increasing numbers of published systematic reviews, criteria for assessing the quality of reporting have been developed. In 1999, in response to growing evidence of a lack of clarity in the reporting of reviews in medicine, an international group of scientists developed reporting guidance for meta-analyses of randomised trials: the QUOROM statement (QUality Of Reporting Of Meta-analyses) [23]. To reflect methodological changes and conceptual advances, a decade later the QUOROM statement was updated and extended into a new tool that set minimum standards for transparent and complete reporting of systematic reviews and meta-analyses. These updated standards are known as PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) and consist of a 27-item checklist and an easy-to-follow flow-diagram template that shows the stages at which evidence is excluded during the conduct of a systematic review [24]. The PRISMA Statement is accompanied by the PRISMA Explanation and Elaboration document [25]. PRISMA is relevant not only for reporting systematic reviews of randomised trials but also for reviews of non-randomised (observational and diagnostic) studies assessing the benefits and harms of interventions.
PRISMA reporting guidance has continued to develop (see [26]) and several extensions have been published so far, including PRISMA-Equity [27], an extension for abstracts [28] and an extension for protocols (PRISMA-P) [29, 30].
Along with its use by review authors as a pre-submission checklist, PRISMA is also used by journal editors and peer reviewers to improve reporting standards across medical and general journals [31]. PRISMA has been widely accepted and endorsed by five editorial organisations, including Cochrane and the World Association of Medical Editors, and by 180 biomedical journals [32]. To promote global uptake, the PRISMA statement has been published in multiple biomedical journals, and the checklist and flow diagram have been translated into a number of other languages, including Russian, Japanese and Korean [33]. Recently, as awareness of PRISMA has grown, reviewers have also looked to the PRISMA statement and checklist as a form of guidance: O’Leary et al. [34] found that some 25% of reviews in the field of marine biology referred to PRISMA as guidelines used to structure their conduct. Whilst PRISMA is, strictly speaking, a set of reporting standards and not true systematic review guidance, this demonstrates the appeal of systems like PRISMA in acting not only as a reporting standard but also as a primer for systematic review conduct.