Evidence maps and evidence gaps: evidence review mapping as a method for collating and appraising evidence reviews to inform research and policy
© The Author(s) 2017
Received: 12 January 2017
Accepted: 11 July 2017
Published: 17 July 2017
Evidence reviews are a key mechanism for incorporating extensive, complex and specialised evidence into policy and practice, and in guiding future research. However, evidence reviews vary in scope and methodological rigour, creating several risks for decision-makers: decisions may be informed by less reliable reviews; apparently conflicting interpretations of evidence may obfuscate decisions; and low quality reviews may create the perception that a topic has been adequately addressed, deterring new syntheses (cryptic evidence gaps). We present a new approach, evidence review mapping, designed to produce a visual representation and critical assessment of the review landscape for a particular environmental topic or question. By systematically selecting reviews and describing the scope and rigour of each, the map helps guide non-specialists to the most relevant and methodologically reliable reviews. The map can also direct future research through the identification of evidence gaps (whether cryptic or otherwise) and redundancy (multiple reviews on similar questions). We consider evidence review mapping a complementary approach to systematic reviews and systematic maps of primary literature and an important tool for facilitating evidence-based decision-making and research efficiency.
Keywords: CEESAT, Evidence-based policy, Evidence review map, Gap analysis, Review evaluation, Research synthesis, Research methods
Scientific evidence is central to effective environmental policymaking and practice but its use requires an appreciation of the reliability of the evidence base. Primary research forms the backbone of an evidence base; however, non-specialists may lack the resources or expertise to evaluate the appropriateness of methodology and data analysis in primary studies, and to identify trends and patterns across multiple studies. Furthermore, the inherent complexity and variability of natural systems combined with differences in study methods typically generates findings that can be selectively used to support particular conclusions [1, 2]. Against this backdrop, non-specialists seeking an overview of particular topics (e.g. decision-makers and researchers in other fields) are increasingly likely to rely on evidence reviews that synthesise evidence across the spectrum of primary literature related to a specific, policy-relevant question [3–6]. Evidence reviews (hereafter also referred to as ‘reviews’) attempt to answer a specific question by aggregating and synthesising the results of primary studies and may include meta-analysis (statistical methods for combining the magnitude of the outcomes [effect sizes] across different data sets addressing the same research question) and/or narrative synthesis (use of prose to summarise and draw conclusions from primary research, which may be supplemented by the reviewers’ own experience and may include limited quantitative analysis). Evidence reviews may or may not be conducted within the framework of systematic review methodology. Reviews that only collect and configure the primary literature with respect to a broad question, such as systematic maps, are not considered evidence reviews in this context.
This reliance on reviews of variable scope and rigour creates several risks for decision-makers:
decisions may be informed by less rigorous and/or biased reviews because of a lack of systematic collation of reviews and subsequent appraisal of review quality;
apparently conflicting interpretations of evidence among reviews with similar scope may obfuscate decisions; and
no new or updated reviews are conducted on a topic, because researchers and decision-makers are unaware that the topic lacks a highly rigorous synthesis (cryptic evidence gaps).
Clearly communicating the scope and reliability of an evidence base to decision-makers and other end-users is therefore essential to ensure potential limitations in the conduct of the review(s) being considered are appreciated. However, in the absence of communication mechanisms tailor-made for use within decision-making processes this can be challenging. Indeed, difficulties in locating relevant evidence and assessing the reliability of information gathered are often amongst the main concerns highlighted by decision-makers.
Systematic review and systematic map methodologies were developed in part in recognition of the variable reliability of reviews [14, 15] and as an attempt to reduce these risks and provide high quality evidence synthesis and overviews for decision-makers. Nonetheless, while systematic reviews are becoming more widespread in the environmental sector, not all conform to recognised standards, and non-systematic evidence reviews still dominate the review landscape. Moreover, in the medical sector, where systematic review terminology was coined and application of the methodology is most widespread [14, 15], the exponential rise in systematic reviews and meta-analyses has been dogged by criticisms that many do not follow full systematic review guidelines, are conflicted by the pre-conceived opinions or financial motivations of the authors, and/or have been used to advance industry interests instead of good science. With awareness and application of systematic review methodology expanding in the environmental sector, it is vital that ways to maintain and monitor methodological standards are developed and applied to ensure the objectivity, robustness and value of evidence reviews for decision-makers.
Tools for critically assessing the methodological reliability of individual reviews have been developed in many sectors, although limited techniques exist to meaningfully apply and integrate these assessment protocols to inform environmental policy. Similarly, in other sectors, methods to inform non-specialists about the available evidence have been developed, but these often focus on synthesising findings from the systematic review literature, or on describing and appraising systematic reviews and related impact evaluations [22, 23]. None are designed to describe and critically appraise the evidence review literature as a whole or explicitly consider studies that evaluate environmental outcomes arising from interventions. Accordingly, new methods to assess and communicate the reliability (including limitations of evidence and methodological rigour) and scope of all reviews, systematic and non-systematic, could represent a more viable alternative for summarising evidence reviews in the environmental sector. In response, we develop a method that we term ‘evidence review mapping’, to produce a critical overview of all reviews examining the effectiveness of a given intervention and/or the impacts of human pressures and management (e.g. effects of fisheries, impacts of land-use change, effectiveness of conservation interventions). This overview includes a systematic assessment of the questions addressed by each review (i.e. scope and relevance) combined with a critical appraisal of review methods (reliability and risk of bias). Outputs from evidence review mapping are designed specifically to inform non-specialists and improve communication of the evidence base by identifying the most relevant and reliable reviews, and to assist future syntheses by highlighting gaps and redundancy (multiple reviews on similar topics) in the review literature.
Evidence review maps are tailor-made for the environmental decision-making community, offering a communication tool that consists of matrices that summarise the quantity and methodological rigour of reviews on a range of related questions, together with a series of supporting tables that provide more detailed information on the reviews for each particular question. Evidence review maps therefore do not aim to answer a specific question but rather intend to enable end-users to quickly assess the volume of evidence on the question(s) of interest, and to obtain an overview of how reliable that evidence is.
Here, we describe how to construct evidence review maps to inform environmental policy and research, providing examples with reference to a study we undertook in conjunction with developing this methodology . We propose that evidence review mapping offers a complementary approach to systematic reviews and systematic maps, and suggest that adoption of the methodology will facilitate evidence-based policy and practice in conservation and environmental science.
Methodology—evidence review mapping
Our approach to evidence review mapping consists of the following steps: (1) define the overall question of interest, construct a series of more refined questions that consider key aspects of the overall question, and then design the search strategy; (2) systematically search and screen for relevant evidence reviews; (3) assess the scope of each evidence review against the questions defined in step 1; (4) critically appraise the methods of each evidence review using a standardised protocol (we use the Collaboration for Environmental Evidence Synthesis Assessment Tool—CEESAT); and (5) construct the evidence review map(s). The production of an evidence review map integrates some core systematic review methods complemented by several novel approaches designed specifically to search for, collate, categorise, and communicate review articles. We provide a description of each stage below, illustrated, where appropriate, with details adapted from Woodcock et al., who examined the evidence review landscape for the question ‘What is the effectiveness of marine protected areas as a tool for mitigating the impacts of fisheries on biodiversity?’
1. Define the question of interest, construct a series of more refined questions, and then design the search strategy
To provide the framework for the evidence review map, the overall question of interest (i.e. scope of the map) should be established. As with systematic review methodology, the question will largely determine the inclusion criteria for the reviews that form the evidence review map. Consequently, we recommend that a population, intervention/exposure, comparator, outcome (PI/ECO) structure is used to ensure a clearly defined question is developed.
Together with the question, it is important to explicitly document the criteria for deciding whether or not articles are relevant for inclusion, to ensure objectivity, transparency and repeatability during article screening. Once these have been defined, an appropriate search strategy should be developed and detailed within an a priori protocol. The search strategy should draw on search methods used for systematic reviews, with the search effort depending on the scale of the evidence review map, the volume of subject-specific evidence, and the resources available. Topic-specific search strings should then be narrowed to focus on review articles using terms such as ‘AND (review OR “meta-analy*” OR synthes*)’ or using appropriate database filters for ‘review articles’ if available and known to be reliable, and the databases that will be searched should be documented. Note that in sectors where systematic reviews are more widespread, there has been substantial investment in search filters designed to retrieve research by study design or focus (e.g. https://sites.google.com/a/york.ac.uk/issg-search-filters-resource/home). While similar filters exist within search engines commonly used by the environmental sector (e.g. Web of Science, Scopus), this functionality is less well-developed and database-specific, so caution is recommended before relying solely on their use.
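The narrowing of a topic-specific search string to review articles can be sketched in a few lines. This is a minimal illustration, not part of any published protocol: the function name and the topic terms are ours; only the review-filter terms come from the text above.

```python
def build_review_search_string(topic_terms):
    """Combine topic-specific terms with review-focused terms, following the
    'AND (review OR "meta-analy*" OR synthes*)' pattern suggested above."""
    # Quote multi-word phrases so databases treat them as exact phrases
    topic = " OR ".join(f'"{t}"' if " " in t else t for t in topic_terms)
    review_filter = '(review OR "meta-analy*" OR synthes*)'
    return f"({topic}) AND {review_filter}"

# Illustrative topic terms for the MPA example question
print(build_review_search_string(["marine protected area*", "MPA"]))
# ("marine protected area*" OR MPA) AND (review OR "meta-analy*" OR synthes*)
```

In practice the string would be adapted to each database's syntax, since wildcard and phrase handling differ between, e.g., Web of Science and Scopus.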
2. Systematically search and screen for relevant evidence reviews
Table 1 Example evidence review map supporting tables: (a) list of reviews assessed as relevant for inclusion, with review score and the identifier assigned to each individual review (either a number for meta-analyses or a letter for narrative syntheses); (b) scope of meta-analyses that examine broad questions: region, taxa, MPA characteristic and outcome measure; and (c) scope of narrative syntheses that examine the specific question: broad focus and region. Example adapted from Woodcock et al.
[Table 1 body not reproduced: the original cross-tabulation of review identifiers against taxa, outcome measures and narrative-synthesis foci did not survive text extraction.]
3. Assess evidence review scope
Constructing an evidence review map requires that each relevant review is systematically categorised according to the question(s) addressed (as defined in step 1) and the type of synthesis undertaken [e.g. narrative/qualitative synthesis (which may include limited quantitative analyses) or meta-analysis]. Note that multiple questions are often addressed within a single review, and so a single review may be included several times in an evidence review map. The extent to which scope can be objectively categorised is influenced by the methods employed in the review. Whilst a meta-analysis can usually be objectively categorised as addressing a particular question based on whether or not effect sizes are presented, there is no such obvious distinction in many narrative reviews, in which questions could be addressed through varying amounts of text with varying degrees of relevance and supporting references. This problem is exacerbated because the scope of narrative reviews is often broader than that of meta-analytical reviews. Reliable categorisation of scope is thus possible in greater detail for meta-analyses than for narrative syntheses. The assessment of review scope should therefore be undertaken in two parts, firstly considering reviews that apply meta-analytical techniques and secondly reviews that use narrative synthesis.
Categorisation of meta-analyses as addressing particular questions requires effect sizes to be quoted directly, presented graphically or used in statistical tests of relationships. Instances where relevant terms are included as potential confounding variables but statistics (e.g. effect sizes) are not reported would not be considered as directly addressing a given question. A threshold for the minimum number of primary studies a meta-analysis must contain to be categorised as addressing a particular question could be set. The minimum threshold is highly context-specific (e.g. relating to the quality of primary research, typical effect sizes and variances, etc.) and consequently requires a transparent case-by-case judgement for each evidence review map. Where a threshold is considered appropriate, reviews that do not meet this threshold should be noted as partially addressing the question, thereby allowing articles that are based on a small volume of primary research to be identified.
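The classification logic above can be sketched as a small decision function. This is a hedged illustration: the function name is ours, and the default threshold of five primary studies is purely illustrative — as the text stresses, any real threshold must be justified case by case in the map's protocol.

```python
def scope_status(addresses_question, n_primary_studies, threshold=5):
    """Classify how a meta-analysis relates to a refined question.

    addresses_question: True if effect sizes are quoted, plotted, or used
    in statistical tests for this question (relevant terms appearing only
    as confounders do not count). The threshold value is illustrative.
    """
    if not addresses_question:
        return "not addressed"
    if n_primary_studies < threshold:
        return "partially addresses"  # flagged with a symbol in the map
    return "addresses"
```

A usage example: a meta-analysis reporting effect sizes from three primary studies would be recorded as partially addressing the question, so that end-users can see its conclusions rest on a small evidence base.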
4. Critically appraise the methods of each evidence review using a standardised protocol
The assessment of review methodology forms the penultimate stage in evidence review map development. A standardised protocol designed to assess the reliability of environmental evidence reviews should be used to critically appraise the methodological rigour of each relevant review in a consistent manner. For this purpose we recommend the Collaboration for Environmental Evidence Synthesis Assessment Tool (CEESAT). The current version of CEESAT (available at http://www.environmentalevidence.org/review-appraisals) consists of 13 criteria relating to the reliability (combining objectivity, transparency, and comprehensiveness) of reviews, and achieves good repeatability when independent assessments of the same review are compared [6, 12, 19]. For each criterion, reviews receive 3 points, 1 point, or 0 points. Scores therefore range from 0 to 39: the higher the score, the greater the confidence that the review methodology is robust and reliable in terms of repeatability and risk of bias. Importantly, while certain criteria within CEESAT require statistical analysis to score highly, points for these criteria are available to narrative syntheses [6, 19]. Furthermore, high scores are available equally to narrative syntheses and meta-analyses for most of the criteria, enabling narrative syntheses to be assessed as having high reliability (a score of 26.5+) where appropriate. Reviews should be independently scored by two assessors and the repeatability of the assessment evaluated with a weighted kappa test of agreement [24, 25] or similar, to take into account the magnitude of any disagreements: e.g. a 1-0 disagreement is ranked as magnitude 1, whereas a 3-0 disagreement is ranked as magnitude 3 [10, 28]. Disagreements between assessors should be discussed and, where these reflect uncertainty over whether or not a criterion was met, the average score from the two assessors should be used.
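The magnitude-weighted agreement test described above can be sketched as a linearly weighted Cohen's kappa over the CEESAT point values, so that a 3-0 disagreement (magnitude 3) is penalised more heavily than a 1-0 disagreement (magnitude 1). The implementation and the example scores are ours, not taken from any published assessment.

```python
from itertools import product

SCORES = (0, 1, 3)  # points available per CEESAT criterion

def weighted_kappa(assessor_a, assessor_b, scores=SCORES):
    """Cohen's kappa with linear weights: each disagreement is weighted by
    the absolute difference between the two scores, so a 3-0 disagreement
    counts three times as much as a 1-0 disagreement."""
    n = len(assessor_a)
    # Observed mean weighted disagreement across the criteria
    observed = sum(abs(a - b) for a, b in zip(assessor_a, assessor_b)) / n
    # Expected weighted disagreement under chance, from each assessor's marginals
    pa = {s: assessor_a.count(s) / n for s in scores}
    pb = {s: assessor_b.count(s) / n for s in scores}
    expected = sum(abs(i - j) * pa[i] * pb[j] for i, j in product(scores, scores))
    return 1 - observed / expected

# Hypothetical scores from two assessors across the 13 CEESAT criteria
a = [3, 3, 1, 0, 3, 1, 0, 1, 3, 0, 1, 3, 0]
b = [3, 1, 1, 0, 3, 1, 0, 1, 3, 0, 3, 3, 0]
print(round(weighted_kappa(a, b), 2))  # 0.78
```

Established implementations (e.g. weighted kappa routines in standard statistics packages) would normally be preferred; the sketch simply makes the magnitude weighting explicit.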
There are a number of possible approaches to interpreting CEESAT scores; however, we currently recommend dividing total CEESAT scores into three categories: 0–13, 13.5–26 and 26.5+, loosely representing low, intermediate/moderate and high methodological reliability. The boundaries for these categories reflect an average score across the 13 criteria of 0–1, 1–2 and 2–3, respectively. Note that while these boundaries may change as further guidance on scoring interpretation becomes available, or if certain aspects of review conduct are prioritised by those conducting an evidence review map, the methodology for incorporating scores into evidence review mapping will remain valid.
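The recommended score bands translate directly into code. This is a minimal sketch of the categorisation above; the function name is ours, and half-point totals are possible because disagreements between assessors may be resolved by averaging their scores.

```python
def reliability_category(total_score):
    """Map a total CEESAT score to a reliability band. Totals run from 0 to
    39 (13 criteria each scored 0, 1 or 3); half-points can arise when two
    assessors' scores are averaged."""
    if not 0 <= total_score <= 39:
        raise ValueError("CEESAT totals must lie between 0 and 39")
    if total_score <= 13:        # average criterion score 0-1
        return "low"
    if total_score <= 26:        # average criterion score 1-2
        return "intermediate"
    return "high"                # average criterion score 2-3
```

As the text notes, mappers who weight certain criteria more heavily could substitute different boundaries here, provided the rationale is documented in the map.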
5. Construct the evidence review map
Finally, using information from steps 3 and 4, a series of evidence review maps may be constructed to visually represent the review landscape for the overall question of interest (Fig. 3). Separate maps should be constructed to describe meta-analyses and narrative reviews to ensure similar levels of objectivity in review categorisation within each map. Evidence review maps should be constructed using the refined questions as defined in step 1 for meta-analyses and those determined in steps 1 and/or 3 for narrative reviews. Note that for the example here, none of the narrative syntheses provided sufficient information to score highly when assessed with CEESAT and, as a consequence, the narrative evidence review map shows all reviews to be of low reliability (Fig. 3b). This reflects the specific evidence base for MPA effectiveness rather than being a consequence of differences in the way in which CEESAT evaluates narrative syntheses vs. meta-analyses.
Evidence review maps consist of a matrix that combines information on the number of reviews addressing a given question and the methodological rigour of each review, enabling end-users to see what evidence there is on the question(s) they are interested in. The matrix overview is supported by a series of tables that allow the most rigorous reviews on each question to be identified. The matrix should be read using combinations of the top and left headings to form a particular question. Doughnut charts can be created to represent (1) the total number of reviews that address each individual question (shown in the centre of the doughnut) and (2) the proportion of those reviews that are of high, medium or low methodological reliability. Symbols should be used to identify where reviews have been categorised as partially addressing a particular question (because the threshold for the number of included primary research articles was not met). The format of the matrix means that some questions will not be applicable; these areas should be left blank. Full details of the reviews included for each specific question, together with details of any reviews that partially address a given question, should then be provided in a series of supporting tables (e.g. see Table 1b, c).
Supporting tables should include: (1) details for the search strategy; (2) a list of relevant reviews with their unique identifier and review score; (3) a list of excluded studies with reasons for exclusion; and (4) a series of tables detailing the meta-analyses and narrative syntheses examining each refined question, designed to direct end-users to the most relevant and rigorous review for their requirements.
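The contents of each matrix cell — the doughnut's centre count and its reliability ring — amount to a simple aggregation over the reviews addressing that refined question. This sketch assumes each review has already been assigned a reliability band as in step 4; the data structure and function name are ours.

```python
from collections import Counter

def cell_summary(reliability_bands):
    """Summarise the reviews addressing one refined question: the total
    count (shown in the doughnut centre) and the proportion of reviews in
    each reliability band (the ring segments)."""
    counts = Counter(reliability_bands)
    total = len(reliability_bands)
    return {
        "total": total,
        "proportions": {band: counts.get(band, 0) / total
                        for band in ("high", "intermediate", "low")},
    }

# Hypothetical cell: four reviews address this question
print(cell_summary(["high", "low", "low", "intermediate"]))
```

The resulting proportions can then be drawn as doughnut-chart segments with any plotting library, with the total printed in the centre.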
Understanding the reliability of an evidence base is central to effective decision-making and developing mechanisms for communicating this to decision-makers is therefore essential. While systematic review methodology is considered a key tool for unbiased evidence synthesis, the reliability of evidence reviews will continue to vary for many reasons. With the number of reviews of all types continuing to increase, evidence review maps provide the opportunity to visualise the review landscape for an overall question of interest and to guide non-specialists to more relevant and reliable reviews. We consider evidence review mapping a complementary approach to systematic reviews and systematic maps and an important tool for facilitating evidence-based decision-making.
Evidence review mapping relies on systematic searching, transparent decisions on article inclusion and exclusion, objective assessment of review scope and a standardised and repeatable protocol for critically appraising individual reviews. Application of our approach has illustrated the variable scope and reliability of published evidence reviews and the need to ensure non-specialists can locate the most relevant and rigorous reviews on particular questions of interest, as well as indicating how planned reviews can be designed to complement the existing body of reviews. We believe this approach and its outputs will be useful to decision-makers, advisors and knowledge brokers wishing to use evidence in environmental policy and practice, as well as to researchers looking to contribute to the evidence base through targeted evidence synthesis. Our approach to evidence review mapping could be applied widely to many important questions in environmental policy, as an ‘evidence service’ with considerable benefits for research efficiency and evidence-based policy.
Considerations for conducting evidence review maps
While evidence review mapping is a valuable tool, it will pose some challenges to those wishing to construct such maps. Most notably, decisions over whether or not reviews are relevant for inclusion require subjective judgement. This difficulty arises particularly with narrative syntheses, because there is a continuum between studies that exclusively review the findings from relevant primary research and studies that have a very broad scope or a more conceptual focus and are therefore less appropriate for evidence review mapping. Because most subjective decisions on relevance relate to narrative reviews, altering these decisions would not affect the meta-analytical evidence review map and, while they might adjust the average narrative review score for a given question, they are unlikely to markedly change the conclusions on review rigour and scope. Nonetheless, ensuring transparency of decisions at all stages of evidence review mapping, by documenting them in the supporting tables, is an important component of the methodology, enabling end-users to understand and challenge the decisions made over article inclusion and categorisation. Additionally, some methodologically distinct forms of review, such as qualitative syntheses or mixed-methods reviews, may not be suitable for appraisal using CEESAT, and including such reviews will require further research and development.
Evidence review maps rely on the use of a standardised scoring tool to assess the reliability of reviews. Like other scoring tools, CEESAT assesses the likelihood that a review is reliable on the basis of key attributes relating to the available evidence, conduct and reporting standards, and does not guarantee the reliability of a review against other factors such as author errors. In addition, the total score of a review can mask specific strengths or weaknesses across criteria. A breakdown of scoring across individual criteria may therefore be a useful subsequent output to ensure that decision-makers can gauge the extent to which the strengths and weaknesses of a review make it suitable for the intended use. Researchers who wish to undertake evidence review mapping may use boundaries and/or weightings of criteria other than those suggested here if certain aspects of review conduct are viewed as particularly important to the end-user. In such instances, a clear rationale for amending these boundaries and/or weightings should be provided as part of the evidence review map.
There may be instances in which more than one high scoring review addresses a particular question. In such situations, further assessment could consider the consistency in findings between reviews (noting that direct comparisons of specific results can be misleading if subtle differences in review scope are not identified). If results differ between reviews, potential reasons for ambiguity could then be considered, and further work targeted to examine the evidence base where reasons for discrepancy are unclear. In the latter situation, systematic reviews, containing meta-analytical techniques wherever possible, and/or targeted and well-designed primary research are recommended to ensure that policymaking is informed by reliable evidence that is robust and methodologically rigorous.
Finally, note that unlike ‘reviews of reviews’, which aim to provide a synthesis of evidence from more than one review, evidence review maps do not set out to answer a specific question but rather seek to provide an overview of the existing review evidence base. Consequently, maps are intended to guide decision-makers to relevant information and to illustrate strengths and weaknesses in the evidence base, rather than to directly provide policy recommendations or guidelines. Future work that may add value to evidence review maps might include developing user-friendly summaries of included reviews, or reports summarising the findings of the evidence review map together with implications for policy and research.
As the review literature continues to expand, it will become increasingly difficult for non-specialists to locate all relevant evidence reviews. Furthermore, when selecting reviews to inform decision-making, non-specialists may lack the resources to critically appraise all available syntheses and may instead treat all evidence reviews equally, or use measures of review rigour that are questionable and/or subjective (e.g. journal impact factor, citation count, author reputation). However, we, and others (e.g. [6, 9, 12, 30]), have found that published evidence reviews in the environmental sector vary considerably in reliability and scope, which presents challenges to those wishing to undertake evidence-based decision-making. We therefore propose that evidence review mapping represents an important method for communicating the reliability and scope of all reviews on a particular topic to non-specialists, thereby facilitating evidence-based policy and practice in conservation and environmental science.
Led the research: BCO, PW, ASP. Wrote and reviewed the manuscript: BCO, PW, MJK, ASP. All authors read and approved the final manuscript.
We thank our potential end-users Ally Dingwall (Sainsbury’s), Tom Pickerell (Seafish, Seafood Watch), Jon Harman (Seafish), Mike Mitchell, David Parker (Young’s Seafood), David Jarrad (Shellfish Association of Great Britain) for their contribution to discussions regarding review reliability. This project was supported in part by a UK Natural Environment Research Council Knowledge Exchange Grant NE/J006386/1.
The authors declare that they have no competing interests.
Availability of data and materials
All data generated or analysed during this study are included in the published article: Woodcock P, O’Leary BC, Kaiser MJ, Pullin AS. Your evidence or mine? Systematic evaluation of reviews of marine protected area effectiveness. Fish Fish. 2017;18(4):668–81. doi:10.1111/faf.12196 (and its Additional files).
This work was supported in part by a UK Natural Environment Research Council Knowledge Exchange Grant NE/J006386/1. No other grants from funding agencies in the public, commercial, or not-for-profit sectors were received.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1. Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316:61–6.
2. Schott GD. The reference: more than a buttress of the scientific edifice. J R Soc Med. 2003;96:191–3.
3. Seavy NE, Howell CA. How can we improve information delivery to support conservation and restoration decisions? Biodivers Conserv. 2010;19:1261–7.
4. Cook CN, Carter RW, Fuller RA, Hockings M. Managers consider multiple lines of evidence important for biodiversity management decisions. J Environ Manage. 2012;113:341–6.
5. Pullin AS, Knight AT, Stone DA, Charman K. Do conservation managers use scientific evidence to support their decision-making? Biol Conserv. 2004;119:245–52.
6. O’Leary BC, Kvist K, Bayliss HR, Derroire G, Healey JR, Hughes K, et al. The reliability of evidence review methodology in environmental science and conservation. Environ Sci Policy. 2016;64:75–82.
7. Koricheva J, Gurevitch J, Mengersen K. Handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013.
8. CEE. Guidelines for systematic review and evidence synthesis in environmental management. Version 4.2. Environmental Evidence; 2013.
9. Roberts PD, Stewart GB, Pullin AS. Are review articles a reliable source of evidence to support conservation and environmental management? A comparison with medicine. Biol Conserv. 2006;132(4):409–23.
10. Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, et al. External validation of a measurement tool to assess systematic reviews. PLoS ONE. 2007;2:e1350.
11. Philibert A, Loyce C, Makowski D. Assessment of the quality of meta-analysis in agronomy. Agric Ecosyst Environ. 2012;148:72–82.
12. Woodcock P, O’Leary BC, Kaiser MJ, Pullin AS. Your evidence or mine? Systematic evaluation of reviews of marine protected area effectiveness. Fish Fish. 2017;18(4):668–81.
13. Holmes J, Clark R. Enhancing the use of science in environmental policy-making and regulation. Environ Sci Policy. 2008;11:702–11.
14. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126:376–80.
15. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25:12–37.
16. O’Leary BC, Bayliss HR, Haddaway NR. Beyond PRISMA: systematic reviews to inform marine science and policy. Mar Policy. 2015;62:261–3.
17. Haddaway NR, Woodcock P, Macura B, Collins A. Making literature reviews more reliable through application of lessons from systematic reviews. Conserv Biol. 2015;29(6):1596–605.
18. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94:485–514.
19. Woodcock P, Pullin AS, Kaiser MJ. Evaluating and improving the reliability of evidence syntheses in conservation and environmental science: a methodology. Biol Conserv. 2014;176:54–62.
20. Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods and products. Syst Rev. 2016;5:28.
21. Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J. Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evid Policy. 2015;11:81–7.
22. Snilstveit B, Vojtkova M, Bhavsar A, Gaarder M. Evidence gap maps—a tool for promoting evidence-informed policy and prioritizing future research. World Bank policy research working paper no. 6725. 2013.
23. McKinnon MC, Cheng SH, Garside R, Masuda YJ, Miller DC. Sustainability: map the evidence. Nature. 2015;528:185–7.
24. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.
25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
26. Ojeda-Martinez C, Bayle-Sempere JT, Sanchez-Jerez P, Forcada A, Valle C. Detecting conservation benefits in spatially protected fish populations with meta-analysis of long term monitoring data. Mar Biol. 2007;151:1153–61.
27. Vandeperre F, Higgins RM, Sanchez-Meca J, Maynou F, Goni R, Martin-Sosa P, et al. Effects of no-take area size and age of marine protected areas on fisheries yields: a meta-analytical approach. Fish Fish. 2011;12:412–26.
28. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37:360–3.
29. Pullin AS, Knight TM. Doing more good than harm—building an evidence-base for conservation and environmental management. Biol Conserv. 2009;142:931–4.
30. Huntington BE. Confronting publication bias in marine reserve meta-analyses. Front Ecol Environ. 2011;9:375–6.