
A methodology for systematic mapping in environmental sciences


Systematic mapping was developed in social sciences in response to a lack of empirical data when answering questions using systematic review methods, and a need for a method to describe the literature across a broad subject of interest. Systematic mapping does not attempt to answer a specific question as do systematic reviews, but instead collates, describes and catalogues available evidence (e.g. primary, secondary, theoretical, economic) relating to a topic or question of interest. The included studies can be used to identify evidence for policy-relevant questions, knowledge gaps (to help direct future primary research) and knowledge clusters (sub-sets of evidence that may be suitable for secondary research, for example systematic review). Evidence synthesis in environmental sciences faces similar challenges to those found in social sciences. Here we describe the translation of systematic mapping methodology from social sciences for use in environmental sciences. We provide the first process-based methodology for systematic maps, describing the stages involved: establishing the review team and engaging stakeholders; setting the scope and question; setting inclusion criteria for studies; scoping stage; protocol development and publication; searching for evidence; screening evidence; coding; production of a systematic map database; critical appraisal (optional); describing and visualising the findings; report production and supporting information. We discuss the similarities and differences in methodology between systematic review and systematic mapping and provide guidance for those choosing which type of synthesis is most suitable for their requirements. Furthermore, we discuss the merits and uses of systematic mapping and make recommendations for improving this evolving methodology in environmental sciences.


The last decade saw increasing concerns that scientific research was not being used to underpin policy and practice in the fields of conservation and environmental science [1–7], with decisions generally being experience-based rather than evidence-based [2, 8]. Methods for evidence-based decision-making are more developed in disciplines such as medicine and social science. In these sectors a suite of ‘systematic evidence synthesis’ methodologies have been developed to gather and collate evidence, and sometimes appraise studies and synthesise study results e.g. [9–11]. Evidence synthesis methods follow rigorous, objective and transparent processes that, unlike traditional literature reviews, aim to reduce reviewer selection bias and publication bias, and enable the reader to view all the decisions made for inclusion and appraisal of research, and how conclusions have been reached. Evidence syntheses are now receiving significant interest in environmental sciences, gaining increasing recognition from research funders e.g. [12, 13]. One of the most recognised evidence synthesis methods is systematic review, which is often regarded as the gold standard [2, 3, 8, 14, 15].

Systematic reviews use existing primary research to, where possible, answer a specific question by combining suitable data from multiple studies, either quantitatively (e.g. using meta-analysis) or qualitatively (e.g. using meta-ethnography) [11, 16]. In environmental sciences ‘meta-analysis’, a powerful statistical tool, is often used in quantitative reviews to combine the results of multiple studies [17]. This improves precision and power through increased effective sample size, and allows additional sources of variability across studies to be investigated [18]. This process of combining the results of multiple studies to answer a question is often called ‘synthesis’ [11]. However, ‘synthesis’ can also be used to describe the methodological process used to gather and collate evidence, which may or may not include extraction of results and combining of study results to answer a question. Here we use the term ‘evidence synthesis’ to describe the whole methodology used to gather and collate evidence (e.g. systematic review, systematic mapping) and the term ‘synthesis of results’ to describe the combining of results from multiple studies either quantitatively or qualitatively to answer a question.

Questions suitable for systematic review are structured to contain a number of key elements; explicit components that specify the essential aspects of a primary research study to be able to answer the review question [19]. In environmental evidence, the most common question type relates to the effects of an intervention or exposure and generally has four key elements that need to be specified: population (P), intervention (I) or exposure (E), comparator (C) and outcome (O), commonly referred to as the PICO or PECO elements [17]. Other types of question structures exist [20] and may be developed for particular circumstances. For example, the European Food Safety Authority (EFSA) is often interested in questions related to the accuracy of a test method for detection or diagnosis, in which case the population (P), index test (I) and target condition (T) must be specified. This structure is often called a ‘PIT’ question type. For questions regarding the prevalence of a condition, or occurrence of an outcome for a particular population, the key elements are the population (P) and outcome (O), often referred to as ‘PO’ question types [12, 19]. Some examples of PICO, PECO, PIT and PO question types are given in Box 1.

Questions in which all the key elements are clearly specified are termed ‘closed-framed’ [19] and help enable systematic review teams to envisage the type of primary research study designs and settings that would be included [12]. Sometimes not all elements of the question are explicit in PICO or PECO type questions because the intervention or exposure and comparator elements are considered together, for example when comparing different levels of exposure to a chemical and the effects on the outcome, but these questions are still considered closed-framed [19].

Despite being ‘gold standards’ in evidence synthesis, systematic reviews are not always feasible. The ability of systematic reviews to produce a quantitative answer to a review question using meta-analysis can be hampered by data availability [21]. High-quality quantitative data are not always abundant in environmental science [22], and methodological detail and results are often poorly reported, unreported, and/or unrecorded [23–25].

Often, multiple options for key question elements (e.g. multiple populations, interventions or exposures) are needed to answer questions. Also, policy-makers frequently ask questions relating to barriers to the effectiveness of interventions (e.g. cost of implementation; lack of awareness of an intervention) and how these can be overcome. The studies collated for these types of question are often highly heterogeneous (mixed), including different methodologies and outcomes or a mixture of quantitative and qualitative research. This may make synthesising the results of individual studies (e.g. via meta-analysis) to answer the question challenging or impossible. In these cases, a means of collating the evidence to identify sub-sets of evidence or questions suitable for systematic review would be beneficial, particularly where the evidence base is extensive [11, 16].

Questions posed by user groups in policy and practice are sometimes ‘open-framed’ (questions that lack specification of some key elements) and may not readily translate into closed-framed questions suitable for systematic review. Decision makers often ask questions relating to the state of evidence on a topic: How much evidence is there? Where is the evidence? What interventions or exposures have been studied? Which outcomes have been studied? How have the studies been undertaken? An example question relevant to environmental sciences might be: ‘What are ‘integrated landscape approaches’ and where and how have they been implemented in the tropics?’ (adapted from [26]). For this type of question it is difficult to define inclusion criteria for specific key elements (to decide what studies are relevant) and an iterative approach may have to be taken. The evidence gathered may be used to inform the development of new theories, conceptualisations or understandings [11, 16, 26]. In environmental sciences, a method of collating studies to address these types of question is often needed.

Sometimes the aim of collating evidence may be to inform secondary synthesis other than systematic review. For example, to gather data for modelling [27]. Stakeholders may also be interested in research activity already captured in existing systematic reviews either to ask questions about the nature of the research field or to identify primary research that could be used in further secondary synthesis [11]. Again, this highlights the need for a means of cataloguing all the available evidence in a comprehensive, transparent and objective manner to describe the state of knowledge, identify sub-sets of evidence or topics suitable for further secondary synthesis or identify where there is a lack of evidence.

In the social sciences, ‘systematic mapping’ methodology was developed in response to the need to adapt existing systematic review methodology for a broader range of circumstances including some of those mentioned above [10, 28–30].

Systematic mapping does not aim to answer a specific question as does a systematic review, but instead collates, describes and catalogues available evidence (e.g. primary, secondary, quantitative or qualitative) relating to a topic of interest [10]. The included studies can be used to develop a greater understanding of concepts, identify evidence for policy-relevant questions, knowledge gaps (topics that are underrepresented in the literature that would benefit from primary research), and knowledge clusters (sub-sets of evidence that may be suitable for secondary research, for example using systematic review) [10, 11, 30–32].

Systematic mapping follows the same rigorous, objective and transparent processes as do systematic reviews to capture evidence that is relevant to a particular topic, thus avoiding the potential pitfalls of traditional literature reviews (e.g. reviewer and publication bias). However, since systematic mapping is not restricted by having to include fully specified and defined key elements, it can be used to address open-framed or closed-framed questions on broad or narrow topics. Systematic mapping is particularly valuable for broad, multi-faceted questions relating to a topic of interest that may not be suitable for systematic review due to the inclusion of multiple interventions, populations or outcomes or evidence not limited to primary research. Systematic maps play an important role in evidence syntheses because they are able to cover the breadth of science often needed for policy-based questions [33].

In systematic mapping, the evidence collated is catalogued, usually in the form of a database, providing detailed ‘meta-data’ (a set of data that describes and gives information about other data) about each study (e.g. study setting, design, intervention/s, population/s) and the article it appears in (e.g. author, title, year, peer review journal, conference proceeding). These meta-data are used to describe the quantity and nature of research in a particular area. For example, the number of articles published in journals, books, conferences; the number of publications per year; the number of studies from each country of origin; the type and number of interventions; type and number of different study designs (e.g. survey, randomised controlled trial (RCT), cohort study); the population types (e.g. species studied). As systematic maps may include multiple populations, interventions or exposures, or outcomes (e.g. the number of studies investigating the effectiveness of a specific intervention, for a particular outcome in a specific population), more complex cross-tabulations can also be carried out. By interrogating the meta-data it is possible to identify trends, knowledge gaps and clusters. In further contrast with systematic reviews, systematic maps are unlikely to include extraction of study results or synthesis of results. To date those published within social science disciplines also exclude critical appraisal of included studies [10]. Table 1 outlines the key differences between systematic review and systematic mapping.
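To make the idea of interrogating meta-data concrete, the sketch below cross-tabulates two coding fields with pandas. All study records and field names here are invented for illustration; they are not drawn from any published map.

```python
import pandas as pd

# Hypothetical meta-data for four coded studies (illustrative only).
records = pd.DataFrame([
    {"study": "S1", "intervention": "buffer strips", "outcome": "water quality"},
    {"study": "S2", "intervention": "buffer strips", "outcome": "biodiversity"},
    {"study": "S3", "intervention": "cover crops",   "outcome": "water quality"},
    {"study": "S4", "intervention": "buffer strips", "outcome": "water quality"},
])

# Cross-tabulating interventions against outcomes exposes knowledge
# clusters (well-populated cells) and knowledge gaps (empty cells).
heatmap = pd.crosstab(records["intervention"], records["outcome"])
print(heatmap)
```

Richer cross-tabulations (e.g. intervention by outcome by population) follow the same pattern with additional coding fields.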

Table 1 Differences between a systematic map and systematic review

History of systematic mapping

Methodology for systematic mapping was originally developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) [28, 29]. These systematic maps, sometimes termed ‘descriptive maps’, are often used in a two-stage model of systematic review as a means of initially characterising the evidence base, followed by the identification of smaller sub-sets of studies that can be used to answer focused questions through systematic review [28, 29, 34].

The EPPI-Centre mapping methodology was subsequently adapted by the Social Care Institute for Excellence (SCIE) in response to a lack of empirical data to answer specific questions using systematic review methodology, and a need for methodology to describe the literature in a broad field of interest [10]. SCIE term this methodology ‘systematic mapping’ and have developed detailed guidance for reviewers [10]. It is this guidance, used together with completed SCIE systematic maps [35–37], that was first used to pilot systematic mapping for use in environmental science [38] and that provides the framework for methodology described in this paper.

There are a number of variations in terminology relating to systematic mapping in different disciplines and these are detailed in Box 2.

In environmental sciences, systematic mapping is receiving increasing attention as a methodology in evidence synthesis e.g. [13, 33, 44], but as yet is only briefly discussed in current CEE systematic review guidance [17].

Here, we describe a framework and recommendations for undertaking systematic mapping of environmental research based on guidance developed by SCIE [10], reviewer experience in undertaking systematic maps in environmental sciences, and lessons from different mapping approaches used in other disciplines.

Methodological framework for systematic mapping

Before commencing any review, it is important to establish a team who will be involved throughout the review process [10, 17]. The review team should ensure that they have adequate means of searching multiple sources for relevant published and unpublished literature (e.g. access to relevant bibliographic literature databases, web-based search engines, websites of specialist organisations) and accessing full texts (e.g. subscriptions to relevant journals, adequate funds for interlibrary loans), as a comprehensive and unbiased search is essential to the systematic mapping process. Systematic mapping is conducted in sequential stages (Fig. 1). The first stages (1–3) generally follow those of CEE systematic review guidance [17]. Following screening and full text retrieval, however, stages are cut short, since no synthesis of study results is undertaken. Instead, a database is populated with study meta-data using predefined categories assigned to each study for a suite of variables that describe the study’s setting and design. This process is termed ‘coding’ [10].

Fig. 1 Stages in the systematic mapping process

For systematic mapping in environmental sciences, the major divergence between the guidance herein and systematic mapping methodology described by SCIE [10] is in the optional inclusion of a ‘critical appraisal’ stage following coding. It may be advantageous to include this a priori defined stage to assess the reliability of the evidence base in whole or in part, and to help identify sub-topics or questions that may be suitable for further secondary synthesis (e.g. systematic review). It must be pointed out, however, that any critical appraisal carried out in systematic maps should be viewed with caution when considering any secondary synthesis. For example, external validity of studies may not have been assessed in the systematic map, but this important aspect of appraisal is required for systematic review. Furthermore, where users are interested in taking a sub-set of evidence from the systematic map to be used to address a systematic review question, critical appraisal on specific important aspects of methodology may be required that were not undertaken as part of the systematic map. Following coding and optional critical appraisal, meta-data in the map database are used to describe the evidence base in a narrative synthesis (the results text within the systematic map report). The key benefits and outputs of a systematic map are given in Box 3 and described in more detail in each stage of the systematic mapping process below.

In the following pages we set out a stage-by-stage framework for the systematic mapping process. Key definitions used in systematic mapping are given in Box 4.

Stage 1

Establishing a review team and engaging stakeholders

As for systematic review [17], a review team should be established for a systematic map. The team should include members that have the necessary knowledge and skills required to carry out the systematic map [11]; for example, knowledge of the topic or disciplines included, and skills for literature searching and coding. Establishment of a team is also needed for any quality assurance carried out in the systematic map, since this should involve more than one reviewer (e.g. for the quality assurance of screening and coding of studies). The review team would benefit from being led by an experienced project manager who is responsible for managing tasks, people and resources involved in the systematic mapping process.

The composition of a systematic map review team is likely to be similar to that of a systematic review team, although as no synthesis of results takes place, there is unlikely to be a requirement for specialist statistical expertise within the team. Instead, expertise in database design and management may offer value.

There are distinct benefits to setting the scope of a systematic map in collaboration with stakeholders, and reviewers should attempt to solicit interest from a representative group of relevant stakeholders. Stakeholders may be consulted for their expertise to help shape the scope and ensure the relevance of the systematic map [11]. They may also have commissioned the systematic map. Systematic maps may be of potential interest to a wide range of stakeholders, including policy makers, practitioners, non-governmental organisations, levy boards, scientists and research funding bodies e.g. [26, 45–52].

It should be noted that stakeholders may have a strongly vested interest in the topic and care must be taken to avoid any resultant bias to the systematic mapping process. The systematic map must state clearly who was involved in the process and funders must be declared to provide transparency to the reader.

Setting the scope and question

Firstly, the review team must consider the scope of the topic and the aim of the question to decide whether systematic mapping is the most appropriate approach. When setting the scope of the systematic map it is sometimes useful to develop a conceptual framework or model (visual or textual) to outline what is to be explored by the map e.g. [45, 46]. This makes explicit the assumptions and mechanisms that provide the background to the map, and can help test the suitability of the topic being addressed for the commissioner’s or stakeholder’s needs [13].

The review team should consider the following questions:

Is the aim of the question to

  • Describe the current state of knowledge for a topic or question rather than answering a question through ‘synthesis of results’?

  • Discover how much evidence there is, what populations, interventions, exposures or outcomes have been studied, and how studies have been carried out?

  • Gather and collate evidence to identify suitable topics or sub-groups of evidence that may be suitable for further secondary research and knowledge gaps for primary research?

Is the scope of topic

  • Multi-faceted and likely to collate very heterogeneous studies that would make synthesis of results using systematic review challenging or impossible?

  • Narrow but includes multiple options for key elements and is therefore likely to gather very heterogeneous studies?

  • Likely to be supported by an extensive evidence base that would benefit from initial characterisation to identify sub-topics or sub-groups of evidence for further secondary research?

If the answer to any of these questions is ‘yes’ then a systematic mapping approach should be considered.

Question formulation follows a similar procedure as for systematic reviews [17] (i.e. PICO, PECO, PIT or PO formulae). Alternatively, the question may be more open-framed where, for example, it is not known what interventions or outcomes have been studied or how the studies have been undertaken. Box 5 shows published examples of systematic mapping questions.

Setting inclusion criteria for studies

Establishing the inclusion criteria for systematic maps is similar to that for systematic reviews [17]. Criteria should be set in consultation with stakeholders where possible and considerable effort should be expended in ensuring they are appropriate and well-defined, since they form the backbone of the systematic map. The review team must decide on the extent to which criteria are pre-specified or developed during the mapping process and this will depend on the type of question asked.

Inclusion criteria may be decided by splitting the map into its key elements, as in systematic review (e.g. PICO, PECO, PIT or PO), and may be broad or narrow depending on the breadth and depth of the question.

Systematic maps are potentially less limited in the types of evidence that may be included than systematic reviews because no synthesis of study results is undertaken. Systematic maps can include a wide range of research (e.g. primary, secondary, theoretical, economic) and study designs (e.g. experimental, quasi-experimental or observational). The chosen approach for inclusion of studies should be detailed in the protocol and the type of evidence clearly documented in the map database.

Scoping study

Scoping (sometimes referred to as a ‘pilot study’) is a vital part of systematic reviews [17] and the process should not differ for systematic maps. Scoping can be seen as a ‘trial run’ of the full systematic map, and helps to shape the planned method for the review and inform development of the protocol. In scoping, the search strategy is tried and tested, the number of results found is recorded (typically from searches in just one academic database), and screening is undertaken on a subset of search results to assess proportional relevance at title, abstract and full text levels. Trialling the search strategy in scoping can help reviewers to find an appropriate balance between sensitivity (retrieving all information of relevance) and specificity (the proportion of retrieved articles that are relevant). If the search strategy is too sensitive and not specific enough, the search may return such a large amount of irrelevant material alongside the relevant that screening becomes impossible within reasonable time and resource limits; if it is too specific and not sensitive enough, the search may miss vital evidence. Sometimes the scoping stage may help identify whether a systematic map or a full systematic review is the most appropriate method to address a question. For example, a decision on the most appropriate approach may be influenced by the amount and type of evidence found during scoping. If this is the case, once the scoping stage is completed it must be specified a priori whether a systematic map or systematic review will be conducted.
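The sensitivity/specificity trade-off can be quantified during scoping. The sketch below uses invented article identifiers and numbers: sensitivity is approximated by the fraction of a benchmark list of known-relevant articles that the trial search retrieves, and specificity by the proportion of retrieved articles judged relevant at screening.

```python
# Illustrative scoping statistics (all article IDs are hypothetical).
search_hits = {"A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8"}
benchmark = {"A2", "A5", "A9"}           # known-relevant articles the search should find
screened_relevant = {"A2", "A4", "A5"}   # hits judged relevant at title/abstract

# Sensitivity proxy: share of benchmark articles retrieved by the search.
sensitivity = len(search_hits & benchmark) / len(benchmark)

# Specificity proxy (precision): share of retrieved articles that are relevant.
precision = len(screened_relevant) / len(search_hits)

print(f"sensitivity {sensitivity:.2f}, precision {precision:.2f}")
```

A low sensitivity proxy suggests the search terms need broadening; a very low precision suggests the screening workload may exceed available resources.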

Protocol development and publication

The systematic map protocol takes a similar format to that of a systematic review protocol, and should detail the approach that will be taken for all stages of the mapping process. The systematic map protocol is submitted, peer-reviewed and published in the same way as for a CEE systematic review [17]. Planned outputs (usually in the form of freely accessible databases) should be written into the protocol. If, for unforeseen reasons, a change in methodology is needed then these differences from the protocol must be clearly stated and detailed in the final systematic map report.

Stage 2

Searching for evidence

Searching for evidence and recording the methods for searching and the numbers of articles captured within a systematic map follows the same procedures as within a systematic review [17]. The methods used for searching for evidence should be documented a priori in the protocol, with any variation recorded in the systematic map report. As with systematic review, the search for literature should aim to be as comprehensive as possible, for example using (but not limited to) relevant bibliographic databases, web-based search engines, websites of specialist organisations, bibliographies of relevant reviews, and targeted calls for evidence using professional networks or public calls for submission of articles (e.g. via Twitter). In some cases, systematic maps may return a greater volume of evidence than would be expected for systematic reviews, since systematic maps can address questions that may be multi-faceted, relating to broad topics that aim to gather a wide range of evidence types.

Stage 3

Screening evidence

Screening of search results (also referred to as ‘study inclusion’) against inclusion criteria proceeds in systematic maps in the same way as in systematic reviews: via title, abstract and full text screening stages [17].

Where articles appear to be relevant but full texts cannot be obtained (e.g. the conference proceedings in which an article was published are unavailable), it may be useful to include them within systematic maps, as their inclusion can contribute to the overall picture of the state of knowledge. There are many reasons why a full text may not be obtainable (e.g. the study may not have been published; the reviewer may be unable to access conference proceedings or contact study authors; the published article is no longer available on a website). Studies for which no full text article could be obtained should be categorised separately from those with full texts. Where systematic maps identify potentially relevant but unobtainable articles, it may be beneficial to produce two databases: one of relevant abstracts and one of full texts from which meta-data have been extracted [49]. Only studies with suitable available meta-data can be carried forward to the critical appraisal stage, if this is undertaken.

In recent years, text mining technologies have been developed to reduce screening workload (especially in large, complex evidence bases) and prioritise records for manual screening [39]. Text mining software is readily available, is included in some systematic review management software, such as EPPI-Reviewer [53], and may prove particularly useful for rapidly coding information from within large evidence bases such as those typically identified by systematic mapping.

As with systematic reviews, it is good practice for the screening process in systematic maps to be checked for consistency and clarity between multiple review team members [10] (e.g. using a Kappa analysis) as described in CEE guidance [17], with team members discussing and resolving any ambiguities.
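For teams without dedicated review software, the Kappa analysis mentioned above can be computed directly; the sketch below implements Cohen's kappa for two reviewers' include/exclude decisions on invented example data.

```python
# Cohen's kappa: chance-corrected agreement between two screeners.
def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Expected agreement if both raters coded independently at random.
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n) for label in labels
    )
    return (observed - expected) / (1 - expected)

# Illustrative screening decisions for six records.
reviewer_1 = ["include", "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohen_kappa(reviewer_1, reviewer_2), 3))  # → 0.667
```

Whatever threshold the team adopts for acceptable agreement, disagreements should still be discussed and resolved rather than averaged away.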

A record of the screening process, with numbers of articles excluded at each stage and reasons for exclusion for full texts should be included with the systematic map report for transparency. For example, using a similar template to that provided in CEE guidance for systematic review [17] or a PRISMA-type flow chart [54]. This information can be provided as supplementary material, published alongside the systematic map.

Stage 4

Coding

In systematic reviews, data extraction includes both meta-data (information describing the study and its methods) and study qualitative and/or quantitative results. In systematic maps, data extraction may consist only of meta-data e.g. [38, 47]. As stated above, the process of assigning categories to each study for a suite of variables that describe the study setting and design is referred to as coding [10]. Coding is carried out for a combination of generic (e.g. author, title, year of publication, publication type, data source type, data type) and topic-specific (e.g. intervention/s, population/s, length of study, sampling strategy) fields describing the study setting (Table 2), which will later be collated into a systematic map database. The mapping process is designed to create a useful and structured resource that provides sufficient detail of studies to be of use in future work. Whilst coding may be undertaken in systematic reviews, it is likely to be more extensive in systematic maps, where it is designed to be an output in itself.

Table 2 Examples of coding variables for systematic maps

Deciding what information to include in a systematic map database can be a challenge. Systematic maps may be more widely useful if they detail a broad range of aspects of study designs and settings, but resources may not allow this, particularly for large volumes of evidence. A balance should therefore be struck between utility and available resources, with the information that is most relevant to the systematic map question prioritised for coding.

The most basic map would consist of a list of unique studies rather than articles. An article is the published format in which authors present their research; a study is the unique investigation. The study unit can be difficult to define: in some cases it may be a geographical area, in others a unit in time. Four key variables help to define a study: the researchers, the location, the time, and the method. Which of these variables are used as cut-offs, and where those cut-offs lie, is a decision for the review team.

In systematic maps, coding is typically based on information from full text articles, since many essential details can only be gained from the complete text and abstract quality is extremely variable across the evidence base [55]. However, some basic coding (e.g. generic fields and some topic-specific fields, such as the intervention/s studied where these can be determined) may be undertaken for studies included at title or abstract stage e.g. [49].

It is important to consider the level of detail recorded for study design should any form of critical appraisal be planned. Study results are not usually summarised in systematic map reports as no synthesis of results is undertaken, and collating results may encourage vote-counting (where the numbers of statistically significant results for and against a hypothesis are counted and weighed against each other), which is actively discouraged by CEE [17]. However, authors may decide in some cases to include data relating to results within the database e.g. [49], since this may facilitate future analyses in a full systematic review. In these cases the authors should explicitly state the limitations of the data to guide appropriate interpretation.

The coding tool (the list of meta-data variables to be extracted) and the categories that will be assigned, should be developed with expert assistance and subjected to peer-review within the protocol. Coding can often be complex and it is advisable to pilot the process before the protocol is completed, to ensure that coding is objective, repeatable and adequately reflects the content of the studies.
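Piloting the coding tool is easier if the categories are machine-checkable. The sketch below shows one possible way to validate coded records against a predefined schema; the variables and categories are invented examples, not a recommended standard.

```python
# Hypothetical coding schema: each variable has a fixed set of categories.
CODING_SCHEMA = {
    "publication_type": {"journal", "conference", "report", "thesis"},
    "study_design": {"RCT", "cohort", "survey", "case study"},
    "data_type": {"quantitative", "qualitative", "mixed"},
}

def validate_coding(record):
    """Return a list of problems with a coded study (empty if it conforms)."""
    errors = []
    for field, allowed in CODING_SCHEMA.items():
        value = record.get(field)
        if value not in allowed:
            errors.append(f"{field}: {value!r} not in {sorted(allowed)}")
    return errors

study = {"publication_type": "journal", "study_design": "survey", "data_type": "quantitative"}
print(validate_coding(study))  # → [] (the record conforms)
```

Running such a check during the pilot quickly reveals categories that are ambiguous, missing, or never used.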

Software can assist coding, provided it facilitates the production of one or more searchable databases. For example, EPPI-Reviewer software [53], designed for systematic reviews in the social sciences, lends itself well to coding within systematic maps, since it allows codes to be assigned to full-text electronic articles using a tick-box style process.

At present, no standards exist in environmental sciences for how many reviewers should carry out coding within systematic maps, and planning for coding is likely to depend on available resources. As a guide, SCIE standards for data extraction recommend independent coding of all records by at least two people; once coding is complete, a random sample of 20 % of papers is separately coded for quality assurance by an assessor independent of the project team [10].

Production of the systematic map database

The included studies and their meta-data can be presented within one or more databases. Where possible, it is strongly recommended that these databases are searchable e.g. [48, 49]. This facilitates interrogation by end-users, who may, for example, want to explore a wider range of questions relating to the map and identify relevant sub-sets of evidence.

A database is any organised collation of data. Databases are managed by database management systems (DBMS), such as Microsoft Access, or by simpler spreadsheet software such as Microsoft Excel. Some software is more user-friendly, whilst other software is more powerful. The choice of DBMS is entirely up to the reviewers, but consideration should be given to the accessibility of the software for users and its ease of use. Help files can be vital resources detailing for end-users how to access and interrogate the systematic map database e.g. [38].
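The requirement that the database be searchable can be met with very lightweight tooling. A minimal sketch using SQLite from the Python standard library; the table structure and example records are hypothetical, chosen only to show how end-users might subset the evidence base:

```python
import sqlite3

# Illustrative map database: one row per included study, with coded meta-data.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE studies (
    study_id     INTEGER PRIMARY KEY,
    country      TEXT,
    intervention TEXT,
    outcome      TEXT,
    year         INTEGER)""")
con.executemany(
    "INSERT INTO studies (country, intervention, outcome, year) VALUES (?,?,?,?)",
    [("UK",     "hedgerow",     "bird abundance", 2004),
     ("Sweden", "buffer strip", "water quality",  2009),
     ("UK",     "buffer strip", "water quality",  2011)])

# End-users can interrogate the map with ordinary queries, e.g. all UK
# studies, or any combination of intervention and outcome of interest.
uk_rows = con.execute(
    "SELECT COUNT(*) FROM studies WHERE country = ?", ("UK",)).fetchone()[0]
```

The same queries work whether the underlying store is SQLite, Access or an exported spreadsheet loaded into an analysis tool.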

Reviewers may choose to create more than one database from a systematic map to provide varying levels of information. For example, a database of all included studies with basic meta-data (including potentially relevant studies for which no full text articles are available) provides users with a basic overview of the state of evidence. A second database containing only full text articles that have additional coding fields provides additional value, since, although it contains fewer included studies, it may be of greater use in supporting decision-making, particularly if it can be coded to inform critical evaluation. See [49] for example.

It is important to retain a high degree of clarity across any databases produced, in order to go beyond a simple list of citations. Where multiple articles discuss one study, or where studies appear to be linked, this should be highlighted in the database e.g. [48, 49], since dual publication risks double-counting within a map. Highlighting these linkages also helps future syntheses avoid double counting study results where the links between study lines in the database might otherwise be missed.

Stage 5

Critical appraisal (optional)

Critical appraisal within systematic mapping is a useful tool for investigating the overall validity of the evidence base or subsets of evidence, something that may be specified by stakeholders commissioning reviews. Critical appraisal in systematic maps e.g. [47–49] is optional, however, since there is no synthesis of results, and it is difficult to assess external validity (generalisability) when a question has not been explicitly specified, as it is in a systematic review. Critical appraisal for systematic mapping may follow the processes outlined for systematic review [17], and should only be undertaken using full-text articles for studies where a sufficient level of detail in study methods is provided.

Since systematic maps are often designed to provide an overview of all evidence relating to a topic, they may include a wide variety of different types of study, some of which would normally be excluded from more focussed systematic reviews. In these cases, critical appraisal may be particularly useful where inferences regarding the ‘robustness’ of different aspects of the evidence base can be made and used to complement conclusions regarding the volume of evidence.

Stage 6

Describing the findings

The systematic map database can be used to describe the scope of the research and identify knowledge clusters and gaps. The map can be interrogated by users, allowing them to find information relating to any chosen combination of subsets of the meta-data. Simple numerical accounts of frequencies in each category (e.g. the number of studies investigating a particular species) and more complex cross-tabulations (e.g. the number of studies investigating the effectiveness of a particular intervention, in a specific farming system, for a named species) enable correlations, trends, gaps and clusters to be identified e.g. [38, 46–49].
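Frequency counts and cross-tabulations of this kind are straightforward to produce once the coded meta-data are in tabular form. A sketch using pandas, with hypothetical intervention and outcome categories (the names are illustrative only):

```python
import pandas as pd

# Hypothetical coded meta-data: one row per study in the map database.
studies = pd.DataFrame({
    "intervention": ["hedgerow", "hedgerow", "buffer strip",
                     "buffer strip", "hedgerow"],
    "outcome":      ["bird abundance", "plant richness", "bird abundance",
                     "bird abundance", "bird abundance"],
})

# Simple frequency account: number of studies per intervention.
per_intervention = studies["intervention"].value_counts()

# Cross-tabulation: well-populated cells indicate knowledge clusters,
# while empty or sparse cells flag knowledge gaps.
cross = pd.crosstab(studies["intervention"], studies["outcome"])
```

In a real map the same two calls would be run over whichever meta-data variables stakeholders nominate (e.g. intervention by farming system by species).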

A systematic map report should describe the evidence base in a similar way to the descriptive statistics section in a systematic review (see report production section below). Authors usually start by describing simple generic (e.g. number of studies per year, country, publication type) and study-specific trends (e.g. number of studies per intervention, population, outcome, study design) before describing more complex, in-depth analyses of the evidence base e.g. [38, 46–49]. Compared to systematic review, systematic maps may put more emphasis on describing the evidence, since this is the primary objective of the map.

Visualising the findings

Pivot tables and pivot charts are useful ways of easily visualising the quantity (and quality if assessed) of evidence across a suite of meta-data variables e.g. [38]. It may be suitable and useful to present study meta-data as a layer within a geographical information system (GIS). This may be a simple world map showing the location and number of included studies e.g. [48] or a more complex interactive world map which also enables the reader to select studies from sub-topics of interest and access study meta-data e.g. [50, 51]. This can easily be undertaken using online tools, such as Google Maps, if all study lines in an Excel database have latitude and longitude associated with them. Such visualisations are relevant to systematic maps with a global or large-scale scope, where geographical distribution of study effort and type may be particularly interesting.
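Where study lines carry latitude and longitude, a basic geographical visualisation needs only a scatter plot. A minimal sketch with matplotlib, using invented coordinates (a real map would draw country outlines with a GIS library or an online tool such as Google Maps, as described above):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical map database export: one row per study with coordinates.
studies = pd.DataFrame({
    "study":     ["S1", "S2", "S3"],
    "latitude":  [52.1, -1.3, 60.4],
    "longitude": [-3.9, 36.8, 24.9],
    "n_articles": [4, 1, 2],   # e.g. number of articles reporting the study
})

fig, ax = plt.subplots()
ax.scatter(studies["longitude"], studies["latitude"],
           s=40 * studies["n_articles"])   # bubble size ~ study effort
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Geographical distribution of included studies")
fig.savefig("study_map.png")
```

Scaling marker size by the number of articles (or by critical appraisal category, if assessed) shows study effort and distribution in a single figure.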

Other useful forms of visualisation include two-dimensional figures and tables e.g. [46, 47]. Such visualisations can show, for example, the number of studies, critical appraisal findings or sample size across countries, outcomes, populations or covariates. Categorical variables can be included in these visualisations as additional dimensions.

We anticipate that novel ways of visualising systematic map data will be developed and adapted as systematic mapping becomes more widely used.

Report production and supporting information

All systematic maps should involve a full written report that accompanies the systematic map database. This report documents the methods used in the mapping process in a transparent, objective and repeatable manner. The systematic map report should follow the same basic format as for a systematic review report [17] and include stages specific to systematic maps in the methods, with all activities clearly justified and explained in detail.

Reporting of specific details (such as search string modification for individual academic databases, search dates and numbers of results) can be documented within supplementary information, as with systematic reviews. CEE requires that the report be accompanied by a list of excluded articles assessed at full text with reasons for exclusion [17]. The database file should be provided in a clear and readily digestible format as a supplementary file that is uploaded and published alongside the final systematic map report. Database files may be accompanied by help files, again in supplementary information.

In general, a narrative report would include:

  • Background and rationale for the systematic map as in systematic review.

  • Clear, transparent detail of the methodology following that for systematic review but including systematic map specific stages.

  • A description of the volume and characteristics of the evidence base, including generic (e.g. geographical location, publication source) and study-specific trends (e.g. the number and type of population and interventions studied and outcomes measured) as well as describing more complex and in depth analysis of trends in the evidence base.

  • Where critical appraisal is included, a description of the evidence including the relative reliability of subsets of studies. A description of whether the evidence within each study is consistent, contested or mixed may also be included.

  • Recommendations for primary research based on knowledge gaps that have been identified, and recommendations for secondary research in relation to knowledge clusters.

  • Priorities and scope for future systematic review based on the available evidence and policy/practice needs.

  • Implications for research, policy and practice.

Systematic maps and the wider evidence base

The main aim of a systematic map is to collate and catalogue a body of evidence to describe the state of knowledge for a particular topic or question. This catalogue (the database) forms a searchable resource that is published alongside the systematic map report and can be interrogated to allow users to subset studies based on any of the measured meta-data variables.

Databases not only facilitate user interaction with the outputs of systematic map reports, but also updating (as new evidence is published) and upgrading (proceeding from a systematic map to full systematic review).

The map database allows researchers to identify areas of the evidence base that are sufficiently represented to allow meaningful systematic review. Using a systematic map as the basis for a systematic review should be a relatively rapid process, since collation and coding of all available relevant evidence has often already been performed (although sometimes additional, more focused searches may be required). The extension of a subtopic into a separate systematic review would involve: drafting of a systematic review protocol; selection of relevant studies from the systematic map; updating searches to capture research published since the original search; collection of full text articles; full critical appraisal for study internal and external validity (reviewers should not assume that critical appraisal carried out on studies in systematic maps is sufficient or appropriate for systematic review); extraction of quantitative or qualitative data; synthesis of results using appropriate quantitative or qualitative methodology where possible; and drafting of a systematic review report. The original systematic map database may also be included for additional value.

Policies for registering, planning and undertaking a CEE systematic review or map update are under development [56], and are likely to be equally relevant to extensions of systematic maps into systematic reviews. However, updating systematic maps is likely to depend on availability of funds and interest of stakeholders.

Systematic maps may also be used to identify evidence for secondary research purposes other than systematic review, for example modelling e.g. [24] and ‘synopses of conservation evidence’ [38, 57].

As with systematic reviews, systematic map reports may identify deficiencies in the evidence regarding study methods. These deficiencies can allow reviewers to make recommendations for changes in practice, or highlight the need for improved funding to allow more accurate measurements to be taken.


Systematic maps are a novel evidence collation method in environmental sciences. They offer a reliable means of summarising and describing the broad bodies of evidence pertaining to a specific topic and are particularly useful where systematic review may be unsuitable. Here, we have described a methodology and proposed standards for undertaking a systematic map and discussed the various options available. Systematic maps are likely to become a common method in evidence synthesis as a result of their broad relevance and usability.


  1. Pullin AS, Knight TM, Stone DA, Charman K. Do conservation managers use scientific evidence to support their decision making? Biol Conserv. 2004;119:245–52.

  2. Sutherland WJ, Pullin AS, Dolman PM, Knight TM. The need for evidence-based conservation. Trends Ecol Evol. 2004;19:305–8.

  3. Pullin AS, Knight TM. Doing more good than harm—Building an evidence-base for conservation and environmental management. Biol Conserv. 2009;142:931–4.

  4. Sunderland T, Sunderland-Groves J, Shanley P, Campbell B. Bridging the Gap: how can information access and exchange between conservation biologists and field practitioners be improved for better conservation outcomes? Biotropica. 2009;41:549–54.

  5. Cook CN, Carter RW, Fuller RA, Hockings M. Managers consider multiple lines of evidence important for biodiversity management decisions. J Environ Manage. 2012;113:341–6.

  6. Department for Environment Food and Rural Affairs. Defra’s evidence investment strategy: 2010–2013 and beyond. London: Defra; 2013.

  7. Matzek V, Covino J, Funk JL, Saunders M. Closing the knowing-doing gap in invasive plant management: accessibility and interdisciplinarity of scientific research. Conserv Lett. 2014;7:208–15.

  8. Pullin AS, Knight TM. Effectiveness in conservation practice: pointers from medicine and public health. Conserv Biol. 2001;15:50–4.

  9. Campbell Collaboration. Campbell collaboration systematic reviews: policies and guidelines version 11 Oslo, Norway, 2015. Campbell Systematic Reviews. 2015. doi: 10.4073/csr.2015.1. Accessed 19 Oct 2015.

  10. Clapton J, Rutter D, Sharif N. SCIE Systematic mapping guidance; April 2009. Accessed 19 Oct 2015.

  11. Gough D, Oliver S, Thomas J. An introduction to systematic reviews. London: Sage Publications Ltd; 2012.

  12. EFSA. Application of systematic review methodology to food and feed safety assessments to support decision making: EFSA guidance for those carrying out systematic reviews. EFSA Journal. 2010;8(6):1637.

  13. Collins A, Miller J, Coughlin D, Kirk S. The production of quick scoping reviews and rapid evidence assessments: A how to guide. Joint Water Evidence Group Beta Version 2; April 2014. Accessed 19 Oct 2015.

  14. Fazey I, Sailsbury JG, Lindenmayer DB, Maindonald J, Douglas R. Can methods applied in medicine be used to summarize and disseminate conservation research? Environ Conserv. 2004;31:190–8.

  15. Segan DB, Bottrill MC, Baxter PW, Possingham HP. Using conservation evidence to guide management. Conserv Biol. 2010;25:200–2.

  16. Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.

  17. Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Version 4.2. Environmental Evidence 2013. Accessed 19 Oct 2015.

  18. Stewart G. Meta-analysis in applied ecology. Biol Lett. 2010;6:78–81.

  19. Aiassa E, Higgins JPT, Frampton GK, Greiner M, Afonso A, Amzal B, Deeks J, Dorne JL, Glanville J, Lövei GL, Nienstedt K, O’Connor AM, Pullin AS, Rajić A, Verloo D. Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety. Crit Rev Food Sci. 2015;55(7):1016–34.

  20. Booth A. Formulating answerable questions. In: Booth A, Brice A, editors. Evidence-based practice: an information professional’s handbook. London: Facet; 2004. p. 61–70.

  21. Haddaway NR. A call for better reporting of conservation research data for use in meta-analyses. Conserv Biol. 2015;29(4):1242–5.

  22. Newton AC, Stewart GB, Diaz A, Golicher D, Pullin AS. Bayseian belief networks as a tool for evidence-based conservation management. J Nat Conserv. 2007;15:144–60.

  23. Pullin AS, Salafsky N. Save the whales? Save the rainforests? Save the data! Conserv Biol. 2010;24:915–7.

  24. Haddaway NR. Maximizing legacy and impact of primary research: a call for better reporting of results. Ambio. 2014;43(5):703–6.

  25. Haddaway NR, Verhoeven JTA. Poor methodological detail precludes experimental repeatability and hampers synthesis in ecology. Ecol Evol. 2015;5(19):4451–4.

  26. Reed J, Deakin L, Sunderland T. What are the ‘integrated landscape approaches’ and how effectively have they been implemented in the tropics: a systematic map protocol. Environ Evid. 2015;4:2.

  27. Gathman A, Priesnitz KU. What is the evidence on the inheritance of resistance alleles in populations of lepidopteran/coleopteran maize pest species: a systematic map protocol. Environ Evid. 2014;3:13.

  28. Peersman G. A descriptive mapping of health promotion in young people, London: EPPI-Centre, Social Sciences Research Unit, Institute of Education, University of London; 1996. Accessed 19 Oct 2015.

  29. Oakley A, Gough D, Oliver S, James T. The politics of evidence and methodology: lessons from the EPPI-Centre. Evid Policy. 2005;1(1):5–31.

  30. Bates S, Clapton J, Coren E. Systematic maps to support the evidence base in social care. Evid Policy. 2007;3:539–51.

  31. Coren E, Fisher M. The conduct of systematic research reviews for SCIE knowledge reviews. UK: Social Care Institute for Excellence; 2006. Accessed 19 October 2015.

  32. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26:91–108.

  33. Dicks LV, Walsh JC, Sutherland WJ. Organising evidence for environmental management decisions: a ‘4S’ hierarchy. Trends Ecol Evol. 2014;29:607–13.

  34. Shepherd J, Dewhirst S, Pickett K, Byrne J, Speller V, Grace M, Almond P, Hartwell D, Roderick P. Factors facilitating and constraining the delivery of effective teacher training to promote health and well-being in schools: a survey of current practice and systematic review. Public Health Res. 2013;1(2):1–187.

  35. Bates S, Coren E. SCIE systematic map report 1. The extent and impact of parental mental health problems on families and the acceptability, accessibility and effectiveness of interventions June 2006. Accessed 19 Oct 2015.

  36. Carr S, Clapton J. Systematic map report 2. The recovery approach in community-based vocational and training adult mental health day services July 2007. Accessed 19 Oct 2015.

  37. Sharif N, Walt Brown W, Rutter D. Systematic map report 3: The extent and impact of depression on BME older people and the acceptability, accessibility and effectiveness of social care provision December 2008. Accessed 19 Oct 2015.

  38. Randall NP, James KL. The effectiveness of integrated farm management, organic farming and agri-environment schemes for conserving biodiversity in temperate Europe—A systematic map. Environ Evid. 2012;1:4.

  39. Shemilt I, Simon A, Hollands GJ, Marteau TM, Ogilvie D, O’Mara-Eves A, Kelly MP, Thomas J. Pinpointing needles in giant haystacks: use of text mining to reduce impractical screening workload in extremely large scoping reviews. Res Synth Methods. 2013;5:31–49.

  40. Snilstveit B, Vojtkova M, Bhavsar A, Gaarder M. Evidence gap maps a tool for promoting evidence-informed policy and prioritizing future research policy research working paper 6725 December 2013. Accessed 19 Oct 2015.

  41. Rankin K, Cameron DB, Ingraham K, Mishra A, Burke J, Picon M, Miranda J, Brown AN. Youth and transferable skills: an evidence gap map. 3ie Evidence Gap Report 2. New Delhi: International Initiative for Impact Evaluation (3ie). Accessed 19 Oct 2015.

  42. Frampton GK, Harris P, Cooper K, Cooper T, Cleland J, Jones J, Shepherd J, Clegg A, Graves N, Welch K, Cuthbertson BH. Educational interventions for preventing vascular catheter bloodstream infections in critical care: evidence map, systematic review and economic evaluation. Health Technol Asses. 2014;18(15):1–365.

  43. Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen R. The global evidence mapping initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11:92.

  44. Environmental Evidence Journal. Accessed 19 Oct 2015.

  45. Cerutti P, Sola P, Chenevoy A, Iiyama M, Yila J, Zhou W, Djoudi H, Atyi R, Gautier D, Gumbo D, Kuehl Y, Levang P, Martius C, Matthews R, Nasi R, Neufeldt H, Njenga M, Petrokofsky G, Saunders M, Shepherd G, Sonwa D, Sundberg C, van Noordwijk M. The socioeconomic and environmental impacts of wood energy value chains in Sub-Saharan Africa: a systematic map protocol. Environ Evid. 2015;4:12.

  46. Roe D, Fancourt M, Sandbrook C, Sibanda M, Giuliani A, Gordon-Maclean A. Which components or attributes of biodiversity influence which dimensions of poverty? Environ Evid. 2013;3:3.

  47. Neaves LE, Eales J, Whitlock R, Hollingsworth PM, Burke T, Pullin AS. The fitness consequences of inbreeding in natural populations and their implications for species conservation—a systematic map. Environ Evid. 2014;4:5.

  48. Haddaway NR, Styles D, Pullin AS. Environmental impacts of farm land abandonment in high altitude/mountain regions: a systematic map. Environ Evid. 2014;3:17.

  49. Randall NP, Donnison LM, Lewis PJ, James KL. How effective are on-farm mitigation measures for delivering an improved water environment? A systematic map. Environ Evid. 2015;4:18.

  50. Bernes C, Jonsson BG, Junninen K, Lõhmus A, Macdonald E, Müller J, Sandström J. What is the impact of active management on biodiversity in forests set aside for conservation or restoration? A systematic map. Environ Evid. 2015;4:25.

  51. Haddaway NR, Hedlund K, Jackson LE, Kätterer T, Lugato E, Thomsen IK, Jørgensen HB, Söderström B. What are the effects of agricultural management on soil organic carbon in boreo-temperate systems? Environ Evid. 2015;4:23.

  52. Macura B, Secco L, Pullin AS. What evidence exists on the impact of governance type on the conservation effectiveness of forest protected areas? Knowledge base and evidence gaps. Environ Evid. 2015;4:24.

  53. Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis. EPPI-Centre Software 2010. London: Social Science Research Unit, Institute of Education, University of London; 2010. Accessed 19 Oct 2015.

  54. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. PLoS Med. 2009;6(7):e1000097.

  55. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. J Am Med Assoc. 1999;281(12):1110–1.

  56. Pullin AS. Updating reviews: commitments and opportunities. Environ Evid. 2014;3:18.

  57. Dicks LV, Ashpole JE, Dänhardt J, James K, Jönsson A, Randall N, Showler DA, Smith RK, Turpie S, Williams D, Sutherland WJ. Farmland Conservation: Evidence for the effects of interventions in northern and western Europe. Exeter: Pelagic Publishing; 2014.

Authors’ contributions

KLJ, NPR and NRH contributed equally to the preparation of this article and KLJ drafted the manuscript. All authors read and approved the final manuscript.


Acknowledgements

The authors wish to thank the anonymous reviewers for comments on an earlier draft of the manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Nicola P. Randall.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

Cite this article

James, K.L., Randall, N.P. & Haddaway, N.R. A methodology for systematic mapping in environmental sciences. Environ Evid 5, 7 (2016).
