Evidence maps and evidence gaps: evidence review mapping as a method for collating and appraising evidence reviews to inform research and policy

Abstract

Evidence reviews are a key mechanism for incorporating extensive, complex and specialised evidence into policy and practice, and for guiding future research. However, evidence reviews vary in scope and methodological rigour, creating several risks for decision-makers: decisions may be informed by less reliable reviews; apparently conflicting interpretations of evidence may obfuscate decisions; and low quality reviews may create the perception that a topic has been adequately addressed, deterring new syntheses (cryptic evidence gaps). We present a new approach, evidence review mapping, designed to produce a visual representation and critical assessment of the review landscape for a particular environmental topic or question. By systematically selecting reviews and describing the scope and rigour of each, the map helps guide non-specialists to the most relevant and methodologically reliable reviews. The map can also direct future research through the identification of evidence gaps (whether cryptic or otherwise) and redundancy (multiple reviews on similar questions). We consider evidence review mapping a complementary approach to systematic reviews and systematic maps of primary literature and an important tool for facilitating evidence-based decision-making and research efficiency.

Background

Scientific evidence is central to effective environmental policymaking and practice but its use requires an appreciation of the reliability of the evidence base. Primary research forms the backbone of an evidence base; however, non-specialists may lack the resources or expertise to evaluate the appropriateness of methodology and data analysis in primary studies, and to identify trends and patterns across multiple studies. Furthermore, the inherent complexity and variability of natural systems combined with differences in study methods typically generates findings that can be selectively used to support particular conclusions [1, 2]. Against this backdrop, non-specialists seeking an overview of particular topics (e.g. decision-makers and researchers in other fields) are increasingly likely to rely on evidence reviews that synthesise evidence across the spectrum of primary literature related to a specific, policy-relevant question [3,4,5,6]. Evidence reviews (hereafter also referred to as ‘reviews’) attempt to answer a specific question by aggregating and synthesising the results of primary studies and may include meta-analysis (statistical methods for combining the magnitude of the outcomes [effect sizes] across different data sets addressing the same research question [7]) and/or narrative synthesis (use of prose to summarise and draw conclusions from primary research, which may be supplemented by the reviewers’ own experience and may include limited quantitative analysis [6]). Evidence reviews may or may not be conducted within the framework of systematic review methodology [8]. Reviews that only collect and configure the primary literature with respect to a broad question, such as systematic maps, are not considered evidence reviews in this context.

High quality evidence reviews can provide decision-makers (or their advisors) with quick and easy access to available evidence on a topic or question of interest. However, as with primary research, reviews can differ in the rigour of the methods and the reliability of findings (e.g. [6, 9,10,11,12]), and subtle differences in scope may influence their applicability to a particular problem. Indeed, the majority of reviews in environmental science are not conducted according to predefined guidelines, but instead apply a range of methods that each promote or compromise reliability to varying degrees (e.g. [6]). Differences in methodological reliability amongst reviews on similar topics create several related risks for researchers and decision-makers:

  (i) decisions may be informed by less rigorous and/or biased reviews because of a lack of systematic collation of reviews and subsequent appraisal of review quality;

  (ii) apparently conflicting interpretations of evidence among reviews with similar scope may obfuscate decisions; and

  (iii) no new or updated reviews are conducted on a topic, because researchers and decision-makers are unaware that the topic lacks a highly rigorous synthesis (cryptic evidence gaps [12]).

Clearly communicating the scope and reliability of an evidence base to decision-makers and other end-users is therefore essential to ensure potential limitations in the conduct of the review(s) being considered are appreciated. However, in the absence of communication mechanisms tailor-made for use within decision-making processes, this can be challenging. Indeed, difficulties in locating relevant evidence and assessing the reliability of information gathered are often amongst the main concerns highlighted by decision-makers [13].

Systematic review and systematic map methodologies were developed in part in recognition of the variable reliability of reviews [14, 15] and as an attempt to reduce these risks and provide high quality evidence synthesis and overviews for decision-makers. Nonetheless, while systematic reviews are becoming more widespread in the environmental sector, not all conform to recognised standards (e.g. [16]) and non-systematic evidence reviews still dominate the review landscape [17]. Moreover, in the medical sector, where systematic review terminology was coined and application of the methodology is most widespread [14, 15], the exponential rise in systematic reviews and meta-analyses has been dogged by criticisms that many do not follow full systematic review guidelines, are conflicted by pre-conceived opinions or financial motivations of the authors, and/or have been used to advance industry interests instead of good science [18]. With awareness and application of systematic review methodology expanding in the environmental sector, it is vital that ways to maintain and monitor methodological standards are developed and applied to ensure the objectivity, robustness and value of evidence reviews for decision-makers.

Tools for critically assessing the methodological reliability of individual reviews have been developed in many sectors (see [19] and references therein), although limited techniques exist to meaningfully apply and integrate these assessment protocols to inform environmental policy. Similarly, in other sectors, methods to inform non-specialists about available evidence have been developed (e.g. [20]), but these often focus on synthesising findings from the systematic review literature [21], or on describing and appraising systematic reviews and related impact evaluations [22, 23]. None are designed to describe and critically appraise the evidence review literature as a whole or explicitly consider studies that evaluate environmental outcomes arising from interventions. Accordingly, new methods to assess and communicate the reliability (including limitations of evidence and methodological rigour) and scope of all reviews, systematic and non-systematic, could represent a more viable alternative for summarising evidence reviews in the environmental sector. In response, we develop a method that we term ‘evidence review mapping’ to produce a critical overview of all reviews examining the effectiveness of a given intervention and/or the impacts of human pressures and management (e.g. effects of fisheries, impacts of land-use change, effectiveness of conservation interventions). This overview includes a systematic assessment of the questions addressed by each review (i.e. scope and relevance) combined with a critical appraisal of review methods (reliability and risk of bias). Outputs from evidence review mapping are designed specifically to inform non-specialists and improve communication of the evidence base by identifying the most relevant and reliable reviews, and to assist future syntheses by highlighting gaps and redundancy (multiple reviews on similar topics) in the review literature. Evidence review maps are tailor-made for the environmental decision-making community, offering a communication tool that consists of matrices summarising the quantity and methodological rigour of reviews on a range of related questions, together with a series of supporting tables that provide more detailed information on the reviews for each particular question. Evidence review maps therefore do not aim to answer a specific question but rather enable end-users to quickly assess the volume of evidence on the question(s) of interest, and to obtain an overview of how reliable that evidence is.

Here, we describe how to construct evidence review maps to inform environmental policy and research, providing examples with reference to a study we undertook in conjunction with developing this methodology [12]. We propose that evidence review mapping offers a complementary approach to systematic reviews and systematic maps, and suggest that adoption of the methodology will facilitate evidence-based policy and practice in conservation and environmental science.

Methodology—evidence review mapping

Our approach to evidence review mapping consists of the following steps: (1) define the overall question of interest, construct a series of more refined questions that consider key aspects of the overall question, and then design the search strategy; (2) systematically search and screen for relevant evidence reviews; (3) assess the scope of each evidence review against the questions defined in step 1; (4) critically appraise the methods of each evidence review using a standardised protocol (we use the Collaboration for Environmental Evidence Synthesis Assessment Tool—CEESAT [19]); and (5) construct the evidence review map(s). The production of an evidence review map integrates some core systematic review methods complemented by several novel approaches designed specifically to search for, collate, categorise, and communicate review articles. We describe each stage below, illustrated where appropriate with details adapted from Woodcock et al. [12], who examined the evidence review landscape for the question ‘What is the effectiveness of marine protected areas as a tool for mitigating the impacts of fisheries on biodiversity?’

1. Define the question of interest, construct a series of more refined questions, and then design the search strategy

To provide the framework for the evidence review map, the overall question of interest (i.e. the scope of the map) should be established. As with systematic review methodology, the question will largely determine the inclusion criteria for the reviews that form the evidence review map. Consequently, we recommend that a population, intervention/exposure, comparator, outcome (PI/ECO) structure is used to ensure a clearly defined question is developed [8].

Evidence review mapping uses a hierarchy of questions to assess the scope of reviews: the overall question sets the scope of the map, and a series of more refined questions then establish the key areas of interest within the overall question. More refined questions may be separated into broad questions (considering one key area of interest) and specific questions (considering paired combinations of key areas, based on the population, the intervention/exposure, and the outcome metrics). Figure 1 describes the question hierarchy framework, illustrated with example questions from Woodcock et al. [12]. Compiling a comprehensive list of key areas of interest and their pairwise combinations generates the framework for the evidence review map (Fig. 2). The number of refined questions represents a pragmatic trade-off between capturing the many potential influences on the overall question of interest and generating an unmanageably large number of specific questions; the scope of the evidence review map will therefore partly depend on the resources available. In addition, while researchers may undertake evidence review mapping independently to guide future studies, the framework for maps designed to inform policy should be developed in consultation with stakeholders to ensure relevance through appropriate selection of the population, intervention/exposure, and outcome metrics.
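To make the pairwise construction of the framework concrete, the following is a minimal sketch in Python. The key areas listed are hypothetical placeholders loosely modelled on the MPA example in Fig. 2, not the full set used by Woodcock et al. [12].

```python
# Sketch: generate broad and specific questions from key areas of interest.
# The categories and items below are hypothetical illustrations.
from itertools import combinations

key_areas = {
    "population": ["fish", "invertebrates", "temperate", "tropical"],
    "intervention": ["MPA size", "MPA age", "no-take status"],
    "outcome": ["abundance", "biomass", "species richness"],
}

# Broad questions consider one key area of interest at a time.
broad_questions = [(cat, item) for cat, items in key_areas.items() for item in items]

# Specific questions pair items drawn from two different PI/ECO components
# (e.g. population x outcome), forming the cells of the map matrix.
specific_questions = [
    ((cat_a, a), (cat_b, b))
    for (cat_a, items_a), (cat_b, items_b) in combinations(key_areas.items(), 2)
    for a in items_a
    for b in items_b
]

print(f"{len(broad_questions)} broad and {len(specific_questions)} specific questions")
```

In practice some combinations will not be applicable (the white boxes in Fig. 2) and would be pruned from the generated list.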

Fig. 1

Question hierarchy framework used to assess the scope of reviews for evidence review mapping. Descriptions of the purpose of each question level are provided to the left of the triangle. Example questions based on Woodcock et al. [12] illustrating each level of questioning are provided to the right of the triangle

Fig. 2

(Example adapted from Woodcock et al. [12])

Schematic illustrating the process of constructing the framework for an evidence review map. The example evidence review map explores the effectiveness of marine protected areas (MPAs) for biodiversity conservation. Key components of the overall question are identified in the left panel (e.g. population: regional and taxonomic focus; intervention: aspects of MPA design considered; outcome: outcome metrics used to assess MPA effectiveness). In the panel on the right, these components are combined to construct the framework for the evidence review map, consisting of broad (dark grey boxes) and specific (light grey boxes) questions. White boxes indicate questions that are not applicable, e.g. global/temperate question combinations. Abbreviations in headings: Taxa—Invert, invertebrates; MPA Char, MPA characteristics; outcome measures—Abund, abundance

Together with the question, it is important to explicitly document the criteria for deciding whether or not articles are relevant for inclusion, to ensure objectivity, transparency and repeatability during article screening. Once these have been defined, an appropriate search strategy should be developed and detailed within an a priori protocol. The search strategy should draw on search methods used for systematic reviews [8], with the search effort depending on the scale of the evidence review map, the volume of subject-specific evidence, and the resources available. Topic-specific search strings should then be narrowed to focus on review articles using terms such as ‘AND (review OR “meta-analy*” OR synthes*)’, or using database filters for ‘review articles’ where these are available and known to be reliable, and the databases that will be searched should be documented (see [8] for further information on search strategy design and reporting). Note that in sectors where systematic reviews are more widespread, there has been considerable investment in search filters designed to retrieve research by study design or focus (e.g. https://sites.google.com/a/york.ac.uk/issg-search-filters-resource/home). While similar filters exist within search engines commonly used by the environmental sector (e.g. Web of Science, Scopus), this functionality is less well-developed and database-specific, so caution is recommended before relying solely on their use.
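As an illustration, here is a minimal sketch of assembling such a search string; the topic terms are hypothetical, and the review-focused suffix follows the form suggested above.

```python
# Sketch: combine hypothetical topic terms with a review-focused filter.
topic_terms = ['"marine protected area*"', "MPA*", '"marine reserve*"']
review_filter = '(review OR "meta-analy*" OR synthes*)'

search_string = f"({' OR '.join(topic_terms)}) AND {review_filter}"
print(search_string)
# ("marine protected area*" OR MPA* OR "marine reserve*")
#     AND (review OR "meta-analy*" OR synthes*)
```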

2. Systematically search and screen for relevant evidence reviews

A systematic search and screening process to identify relevant articles should be undertaken in line with systematic review guidelines [8], with searches comprehensively documented and the repeatability of inclusion decisions during screening tested using a kappa test of agreement [24, 25] or similar. Inclusion criteria should be refined as necessary to ensure repeatability [8]. Articles assessed for relevance at full text should be clearly documented in the supporting tables to the map (see step 5 and Table 1a–c) with the reasons for exclusion provided where appropriate to maintain transparency. Importantly, in some instances, reviews that are excluded may contain related information of potential interest (e.g. outside the taxonomic or geographic scope of the evidence review map), or partially consider some questions of interest as part of a broader-ranging review with a different scope (this can occur particularly for narrative reviews). Note also that some studies use meta-analytical techniques in the analysis of long-term primary data (e.g. [26]) or data from selected case studies (e.g. [27]) rather than with the aim of comprehensively synthesising published research. As these would not be expected to follow all of the methods required to produce a rigorous review of primary research (e.g. a priori protocol, comprehensive searching, screening), they are unsuitable for evaluation using CEESAT. However, marking such reviews as ‘borderline relevant’ can assist decision-makers seeking additional information. Included articles can be assigned a number (meta-analyses) or letter (narrative syntheses) to act as a unique identifier when constructing the evidence review map and these should be documented in supporting tables (e.g. Table 1a).
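A kappa test of screening agreement [24, 25] can be run in a few lines; the sketch below uses scikit-learn's cohen_kappa_score on hypothetical include/exclude decisions from two reviewers screening the same set of articles.

```python
# Sketch: repeatability of inclusion decisions between two reviewers.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "include", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"kappa = {kappa:.2f}")
# Kappa values above roughly 0.6 are commonly interpreted as substantial
# agreement [25]; lower values suggest the inclusion criteria need refining.
```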

Table 1 Example evidence review map supporting tables: (a) list of reviews assessed as relevant for inclusion, with the review score and the identifier assigned to each individual review (a number for meta-analyses or a letter for narrative syntheses); (b) scope of meta-analyses that examine broad questions: region, taxa, MPA characteristic and outcome measure; and (c) scope of narrative syntheses that examine the specific question: broad focus and region.

3. Assess evidence review scope

Constructing an evidence review map requires that each relevant review is systematically categorised according to the question(s) addressed (as defined in step 1) and the type of synthesis undertaken [e.g. narrative/qualitative (which may include limited quantitative analyses) or meta-analysis; see [6] for definitions of each]. Note that multiple questions are often addressed within a single review, and so a single review may be included several times in an evidence review map. The extent to which scope can be objectively categorised is influenced by the methods employed in the review. Whilst a meta-analysis can usually be objectively categorised as addressing a particular question based on whether or not effect sizes are presented, there is no such obvious distinction in many narrative reviews, in which questions could be addressed through varying amounts of text with varying degrees of relevance and supporting references. This problem is exacerbated because the scope of narrative reviews is often broader than that of meta-analytical reviews. Reliable categorisation of scope is thus possible in greater detail for meta-analyses than for narrative syntheses. The assessment of review scope should therefore be undertaken in two parts, firstly considering reviews that apply meta-analytical techniques and secondly reviews that use narrative synthesis.

Categorisation of meta-analyses as addressing particular questions requires effect sizes to be quoted directly, presented graphically or used in statistical tests of relationships [12]. Instances where relevant terms are included as potential confounding variables but statistics (e.g. effect sizes) are not reported would not be considered as directly addressing a given question [12]. A threshold for the minimum number of primary studies a meta-analysis must contain to be categorised as addressing a particular question could be set. The minimum threshold is highly context-specific (e.g. relating to the quality of primary research, typical effect sizes and variances, etc.) and consequently requires a transparent case-by-case judgement for each evidence review map. Where a threshold is considered appropriate, reviews that do not meet this threshold should be noted as partially addressing the question, thereby allowing articles that are based on a small volume of primary research to be identified.
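The two-level categorisation described here can be expressed as a short function. The sketch below is illustrative only: the record structure is hypothetical, and the default threshold of three primary studies is a placeholder that would need the case-by-case justification discussed above.

```python
# Sketch: categorise how a meta-analysis relates to a refined question.
def categorise_meta_analysis(reports_effect_sizes: bool,
                             n_primary_studies: int,
                             min_studies: int = 3) -> str:
    if not reports_effect_sizes:
        # Relevant terms appear only as potential confounders; effect sizes
        # are not reported, so the question is not directly addressed.
        return "does not address"
    if n_primary_studies < min_studies:
        # Below the minimum-study threshold: flagged (e.g. with a star
        # symbol) as partially addressing the question on the map.
        return "partially addresses"
    return "addresses"

print(categorise_meta_analysis(True, 2))   # partially addresses
print(categorise_meta_analysis(True, 12))  # addresses
```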

Categorisation of narrative syntheses should be undertaken wherever possible using the refined questions initially devised in step 1. However, because narrative syntheses often cover a range of topics in varying depths, such fine-scale categorisation may not be possible, and questions may need to be broadened to more accurately reflect narrative review content (e.g. by broad area of focus; see Fig. 3; [12]).

Fig. 3

(Example adapted from Woodcock et al. [12])

Example a meta-analytical and b narrative evidence review maps illustrating the marine protected area review landscape. The matrix should be read using combinations from the top and left headings to form the particular question of interest. Each individual doughnut chart describes the number of reviews addressing a question, and the proportion of reviews that are high (26.5+; black), moderate (13.5–26; grey) and low (≤13; white) scoring. Star symbols represent where one or more reviews have been identified that partially address the particular question. Full details identifying reviews that address a particular question are reported in the supporting tables. Abbreviations in headings: Taxa—Invert, invertebrates; MPA Char, MPA characteristics; outcome measures—Abund, abundance; broad focus—BD, biodiversity; Fish, fisheries

4. Critically appraise the methods of each evidence review using a standardised protocol

The assessment of review methodology forms the penultimate stage in evidence review map development. A standardised protocol designed to assess the reliability of environmental evidence reviews should be utilised to critically appraise the methodological rigour of each relevant review in a consistent manner. For this purpose we recommend the Collaboration for Environmental Evidence Synthesis Assessment Tool (CEESAT; [19]). The current version of CEESAT (available at http://www.environmentalevidence.org/review-appraisals) consists of 13 criteria relating to the reliability (combining objectivity, transparency, and comprehensiveness) of reviews (see [19] for details), and achieves good repeatability when independent assessments of the same review are compared [6, 12, 19]. For each criterion, reviews receive 3 points, 1 point, or 0 points. Scores therefore range from 0 to 39: the higher the score, the greater the confidence that the review methodology is robust and reliable in terms of repeatability and risk of bias. Importantly, while certain criteria within CEESAT require statistical analysis to score highly, points for these criteria are available to narrative syntheses [6, 19]. Furthermore, high scores are available equally to narrative syntheses and meta-analyses for most of the criteria enabling narrative syntheses to be assessed as having high reliability (a score of 26.5+) where appropriate (e.g. see [6]). Reviews should be independently scored by two assessors and the repeatability of the assessment evaluated with a weighted kappa test of agreement [24, 25] or similar to take into account the magnitude of any disagreements, e.g. a 1-0 disagreement is ranked as magnitude 1, whereas a 3-0 disagreement is ranked as magnitude 3 [10, 28]. Disagreements between assessors should be discussed and where these reflect uncertainty over whether or not a criterion was met, the average score from the two assessors should be used.
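A weighted kappa that ranks disagreements by the difference in points awarded, as described above, can be computed directly. The sketch below is a minimal NumPy implementation over hypothetical per-criterion scores from two assessors; the weights equal the absolute score difference, so a 3-0 disagreement counts as magnitude 3 and a 1-0 disagreement as magnitude 1.

```python
# Sketch: weighted kappa for CEESAT scoring agreement between two assessors.
import numpy as np

def weighted_kappa(scores_1, scores_2, labels=(0, 1, 3)):
    idx = {lab: i for i, lab in enumerate(labels)}
    n = len(labels)
    observed = np.zeros((n, n))
    for x, y in zip(scores_1, scores_2):
        observed[idx[x], idx[y]] += 1
    observed /= observed.sum()
    # Expected agreement under independence of the two assessors.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Disagreement weights: absolute difference in points awarded.
    weights = np.abs(np.subtract.outer(np.array(labels), np.array(labels)))
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical scores (0, 1 or 3) across the 13 CEESAT criteria of one review.
assessor_1 = [3, 3, 1, 0, 3, 1, 1, 0, 3, 3, 1, 0, 3]
assessor_2 = [3, 1, 1, 0, 3, 1, 3, 0, 3, 3, 1, 0, 3]
print(f"weighted kappa = {weighted_kappa(assessor_1, assessor_2):.2f}")
```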

There are a number of possible approaches to interpreting CEESAT scores (see [19] for further discussion); however, we currently recommend dividing total CEESAT scores into three categories, 0–13, 13.5–26 and 26.5+, loosely representing low, moderate and high methodological reliability. The boundaries for these categories reflect an average score across the 13 criteria of 0–1, 1–2 and 2–3, respectively. Note that while these boundaries may change as further guidance on scoring interpretation becomes available, or if certain aspects of review conduct are prioritised by those conducting an evidence review map, the methodology for incorporating scores into evidence review mapping will remain valid.
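For reference, the mapping from a total CEESAT score to these categories is trivially encoded; this sketch uses the boundaries recommended above, which may be amended with a documented rationale.

```python
# Sketch: map a total CEESAT score (0-39) to a reliability category.
def reliability_category(total_score: float) -> str:
    if total_score <= 13:
        return "low"       # average of 0-1 points per criterion
    if total_score <= 26:
        return "moderate"  # average of 1-2 points per criterion
    return "high"          # 26.5+, average of 2-3 points per criterion

print(reliability_category(28.5))  # high
```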

5. Construct the evidence review map

Finally, using information from steps 3 and 4, a series of evidence review maps may be constructed to visually represent the review landscape for the overall question of interest (Fig. 3). Separate maps should be constructed to describe meta-analyses and narrative reviews to ensure similar levels of objectivity in review categorisation within each map. Evidence review maps should be constructed using refined questions as defined in step 1 for meta-analyses and those determined in steps 1 and/or 3 for narrative reviews. Note that for the example here, none of the narrative syntheses provided sufficient information to score highly when assessed with CEESAT (e.g. see [12]) and, as a consequence, the narrative evidence review map shows all reviews to be of low reliability (Fig. 3b). This reflects the specific evidence base for MPA effectiveness rather than being a consequence of differences in the way in which CEESAT evaluates narrative syntheses vs. meta-analyses [12].

Evidence review maps consist of a matrix that combines information on the number of reviews addressing a given question and the methodological rigour of each review, enabling end-users to see what evidence exists on the question(s) of interest. The matrix overview is supported by a series of tables that allow the most rigorous reviews on each question to be identified. The matrix should be read using combinations from the top and left headings to form a particular question. Doughnut charts can be created to represent (1) the total number of reviews that address each individual question (shown in the centre of the doughnut) and (2) the proportion of those reviews that are of high, moderate or low methodological reliability. Symbols should be used to identify where reviews have been categorised as partially addressing a particular question (because the threshold for the number of included primary research articles was not met). The format of the matrix means that some questions will not be applicable; these areas should be left blank. Full details of reviews included for each specific question, together with details of any reviews that partially address a given question, should then be provided in a series of supporting tables (e.g. see Table 1b, c; [12]).
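As an illustration of a single cell of the matrix, the following sketch draws one doughnut chart with matplotlib; the review counts and question heading are hypothetical.

```python
# Sketch: one doughnut chart for one question in the evidence review map.
import matplotlib.pyplot as plt

counts = {"high": 2, "moderate": 3, "low": 1}  # hypothetical reviews per category
colours = {"high": "black", "moderate": "grey", "low": "white"}

fig, ax = plt.subplots(figsize=(2, 2))
ax.pie(list(counts.values()),
       colors=[colours[k] for k in counts],
       wedgeprops=dict(width=0.4, edgecolor="black"))  # width < 1 gives a doughnut
ax.text(0, 0, str(sum(counts.values())),  # total review count in the centre
        ha="center", va="center", fontsize=14)
ax.set_title("Fish x Abund")  # question formed from row and column headings
fig.savefig("map_cell.png", bbox_inches="tight")
```

A full map would tile one such chart per applicable question combination, leaving non-applicable cells blank and adding a star symbol where reviews only partially address a question.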

Supporting tables should include: (1) details for the search strategy; (2) a list of relevant reviews with their unique identifier and review score; (3) a list of excluded studies with reasons for exclusion; and (4) a series of tables detailing the meta-analyses and narrative syntheses examining each refined question, designed to direct end-users to the most relevant and rigorous review for their requirements.

Discussion

Understanding the reliability of an evidence base is central to effective decision-making and developing mechanisms for communicating this to decision-makers is therefore essential. While systematic review methodology is considered a key tool for unbiased evidence synthesis, the reliability of evidence reviews will continue to vary for many reasons [18]. With the number of reviews of all types continuing to increase, evidence review maps provide the opportunity to visualise the review landscape for an overall question of interest and to guide non-specialists to more relevant and reliable reviews. We consider evidence review mapping a complementary approach to systematic reviews and systematic maps and an important tool for facilitating evidence-based decision-making.

Evidence review mapping relies on systematic searching, transparent decisions on article inclusion and exclusion, objective assessment of review scope and a standardised and repeatable protocol for critically appraising individual reviews. Application of our approach has illustrated the variable scope and reliability of published evidence reviews and the need to ensure non-specialists can locate the most relevant and rigorous reviews on particular questions of interest, as well as indicating how planned reviews can be designed to complement the existing body of reviews [12]. We believe this approach and its outputs will be useful to decision-makers, advisors and knowledge brokers wishing to use evidence in environmental policy and practice, as well as to researchers looking to contribute to the evidence base through targeted evidence synthesis. Our approach to evidence review mapping could be applied widely to many important questions in environmental policy, as an ‘evidence service’ with considerable benefits for research efficiency and evidence-based policy.

Considerations for conducting evidence review maps

While evidence review mapping is a valuable tool, it poses some challenges to those wishing to construct such maps. Most notably, decisions over whether or not reviews are relevant for inclusion require subjective judgement. This difficulty arises particularly with narrative syntheses, because there is a continuum between studies that exclusively review the findings of relevant primary research and studies that have a very broad scope or a more conceptual focus and are therefore less appropriate for evidence review mapping. Because most subjective decisions on relevance relate to narrative reviews, altering these decisions would not affect the meta-analytical evidence review map and, while they might adjust the average narrative review score for a given question, they are unlikely to markedly change the conclusions on review rigour and scope. Nonetheless, ensuring transparency of decisions at all stages of evidence review mapping, by documentation in the supporting tables, is an important component of the methodology, enabling end-users to understand and challenge the decisions made over article inclusion and categorisation. Additionally, some methodologically distinct forms of review, such as qualitative or mixed-methods syntheses, may not be suitable for appraisal using CEESAT; including such reviews will require further research and development.

Evidence review maps rely on the use of a standardised scoring tool to assess the reliability of reviews. Like other scoring tools, CEESAT assesses the likelihood that a review is reliable on the basis of key attributes relating to available evidence, conduct and reporting standards; it does not guarantee the reliability of a review against other factors such as author errors. In addition, total scores can mask specific strengths or weaknesses across criteria. A breakdown of scoring across individual criteria may therefore be a useful supplementary output, ensuring that decision-makers can gauge the extent to which the strengths and weaknesses of a review make it suitable for the intended use (see [19] for a detailed discussion of important caveats in applying and interpreting CEESAT scores). Researchers undertaking evidence review mapping may wish to use boundaries and/or criterion weightings other than those suggested here if certain aspects of review conduct are viewed as particularly important to the end-user. In such instances, a clear rationale for amending the boundaries and/or weightings should be provided as part of the evidence review map.

There may be instances in which more than one high scoring review addresses a particular question. In such situations, further assessment could consider the consistency of findings between reviews (noting that direct comparisons of specific results can be misleading if subtle differences in review scope are not identified). If results differ between reviews, potential reasons for ambiguity could then be considered, and further work targeted at examining the evidence base where reasons for discrepancy remain unclear. In the latter situation, systematic reviews, incorporating meta-analytical techniques wherever possible, and/or targeted and well-designed primary research are recommended to ensure that policymaking is informed by reliable evidence that is robust and methodologically rigorous [29].

Finally, note that unlike ‘reviews of reviews’, which aim to provide a synthesis of evidence from more than one review, evidence review maps do not set out to answer a specific question but rather seek to provide an overview of the existing review evidence base. Consequently, maps are intended to guide decision-makers to relevant information and to illustrate strengths and weaknesses in the evidence base, rather than to directly provide policy recommendations or guidelines. Future work to add value to evidence review maps might include developing user-friendly summaries of included reviews, or reports summarising the findings of the evidence review map together with implications for policy and research.

Conclusions

As the review literature continues to expand, it will become increasingly difficult for non-specialists to locate all relevant evidence reviews. Furthermore, when selecting reviews to inform decision-making, non-specialists may lack the resources to critically appraise all available syntheses and may instead treat all evidence reviews equally, or use measures of review rigour that are questionable and/or subjective (e.g. journal impact factor, citation count, author reputation). However, we, and others (e.g. [6, 9, 12, 30]), have found that published evidence reviews in the environmental sector vary considerably in reliability and scope, which presents challenges to those wishing to undertake evidence-based decision-making. We therefore propose that evidence review mapping represents an important method for communicating the reliability and scope of all reviews on a particular topic to non-specialists, thereby facilitating evidence-based policy and practice in conservation and environmental science.

References

  1. Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316:61–6.


  2. Schott GD. The reference: more than a buttress of the scientific edifice. J R Soc Med. 2003;96:191–3.


  3. Seavy NE, Howell CA. How can we improve information delivery to support conservation and restoration decisions? Biodivers Conserv. 2010;19:1261–7.


  4. Cook CN, Carter RW, Fuller RA, Hockings M. Managers consider multiple lines of evidence important for biodiversity management decisions. J Environ Manage. 2012;113:341–6.


  5. Pullin AS, Knight AT, Stone DA, Charman K. Do conservation managers use scientific evidence to support their decision-making? Biol Conserv. 2004;119:245–52.


  6. O’Leary BC, Kvist K, Bayliss HR, Derroire G, Healey JR, Hughes K, et al. The reliability of evidence review methodology in environmental science and conservation. Environ Sci Policy. 2016;64:75–82.


  7. Koricheva J, Gurevitch J, Mengersen K. Handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013.


  8. CEE. Guidelines for systematic review and evidence synthesis in environmental management. Version 4.2. Environmental Evidence; 2013.

  9. Roberts PD, Stewart GB, Pullin AS. Are review articles a reliable source of evidence to support conservation and environmental management? A comparison with medicine. Biol Conserv. 2006;132(4):409–23.


  10. Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, et al. External validation of a measurement tool to assess systematic reviews. PLoS ONE. 2007;2:e1350.


  11. Philibert A, Loyce C, Makowski D. Assessment of the quality of meta-analysis in agronomy. Agric Ecosyst Environ. 2012;148:72–82.


  12. Woodcock P, O’Leary BC, Kaiser MJ, Pullin AS. Your evidence or mine? Systematic evaluation of reviews of marine protected area effectiveness. Fish Fish. 2017;18(4):668–81.


  13. Holmes J, Clark R. Enhancing the use of science in environmental policy-making and regulation. Environ Sci Policy. 2008;11:702–11.


  14. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126:376–80.


  15. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25:12–37.


  16. O’Leary BC, Bayliss HR, Haddaway NR. Beyond PRISMA: systematic reviews to inform marine science and policy. Mar Policy. 2015;62:261–3.


  17. Haddaway NR, Woodcock P, Macura B, Collins A. Making literature reviews more reliable through application of lessons from systematic reviews. Conserv Biol. 2015;29(6):1596–605.


  18. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94:485–514.


  19. Woodcock P, Pullin AS, Kaiser MJ. Evaluating and improving the reliability of evidence syntheses in conservation and environmental science: a methodology. Biol Conserv. 2014;176:54–62.


  20. Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods and products. Syst Rev. 2016;5:28.


  21. Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J. Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evid Policy. 2015;11:81–7.


  22. Snilstveit B, Vojtkova M, Bhavsar A, Gaarder M. Evidence gap maps—a tool for promoting evidence-informed policy and prioritizing future research. World bank policy research working paper no. 6725. 2013.

  23. McKinnon MC, Cheng SH, Garside R, Masuda YJ, Miller DC. Sustainability: map the evidence. Nature. 2015;528:185–7.


  24. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.


  25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.


  26. Ojeda-Martinez C, Bayle-Sempere JT, Sanchez-Jerez P, Forcada A, Valle C. Detecting conservation benefits in spatially protected fish populations with meta-analysis of long term monitoring data. Mar Biol. 2007;151:1153–61.


  27. Vandeperre F, Higgins RM, Sanchez-Meca J, Maynou F, Goni R, Martin-Sosa P, et al. Effects of no-take area size and age of marine protected areas on fisheries yields: a meta-analytical approach. Fish Fish. 2011;12:412–26.


  28. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37:360–3.


  29. Pullin AS, Knight TM. Doing more good than harm—building an evidence-base for conservation and environmental management. Biol Conserv. 2009;142:931–4.


  30. Huntington BE. Confronting publication bias in marine reserve meta-analyses. Front Ecol Environ. 2011;9:375–6.



Authors’ contributions

Led the research: BCO, PW, ASP. Wrote and reviewed the manuscript: BCO, PW, MJK, ASP. All authors read and approved the final manuscript.

Acknowledgements

We thank our potential end-users Ally Dingwall (Sainsbury’s), Tom Pickerell (Seafish, Seafood Watch), Jon Harman (Seafish), Mike Mitchell, David Parker (Young’s Seafood), David Jarrad (Shellfish Association of Great Britain) for their contribution to discussions regarding review reliability. This project was supported in part by a UK Natural Environmental Research Council Knowledge Exchange Grant NE/J006386/1.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

All data generated or analysed during this study are included in the published article Woodcock P, O’Leary BC, Kaiser MJ, Pullin AS. Your evidence or mine? Systematic evaluation of reviews of marine protected area effectiveness. Fish Fish. 2017;18(4):668–81. https://doi.org/10.1111/faf.12196 (and its Additional files).

Funding

This work was supported in part by a UK Natural Environmental Research Council Knowledge Exchange Grant NE/J006386/1. No other grants from funding agencies in the public, commercial, or not-for-profit sectors were received.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Bethan C. O’Leary.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

O’Leary, B.C., Woodcock, P., Kaiser, M.J. et al. Evidence maps and evidence gaps: evidence review mapping as a method for collating and appraising evidence reviews to inform research and policy. Environ Evid 6, 19 (2017). https://doi.org/10.1186/s13750-017-0096-9
