Table 2 Classes of bias and their general interpretation

From: Principles and framework for assessing the risk of bias for studies included in comparative quantitative environmental systematic reviews

Risk of bias class

Summary (for further details and examples see [16] and Additional file 5)

1. Bias due to confounding (prior to occurrence of the exposure)

Referred to as “risk of confounding biases” in the CEE tool [16]. These biases arise due to one or more uncontrolled (or inappropriately controlled) variables (confounders) that influence both the exposure and the outcome. If there is confounding then the association between the exposure and outcome will be distorted

Potential confounders may be identified by exploring whether characteristics of the study population (e.g. morphological or physiological differences between individuals, such as colour, age or sex; or characteristics of study plots) are predictive of the outcome effect of interest. Causal directed acyclic graphs (DAGs; also known as causal models or causal diagrams) can be a useful tool for investigating the potential for confounding [63, 64]. Randomisation may be used to control confounding, but it should not be assumed that randomisation was successfully implemented (e.g. baseline differences between characteristics of the exposure and comparator groups could be suggestive of a problem with the randomisation process [59])
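The defining property of a confounder described above (a variable influencing both the exposure and the outcome) can be checked mechanically once a causal DAG is written down. The following is a minimal sketch, using a hypothetical DAG whose variable names (age, habitat) are illustrative assumptions, not drawn from the source:

```python
# Sketch: flagging potential confounders in a hypothetical causal DAG,
# encoded as an adjacency dict of directed edges (parent -> children).

def descendants(dag, node):
    """All nodes reachable from `node` along directed edges (iterative DFS)."""
    seen, stack = set(), list(dag.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(dag.get(n, []))
    return seen

def potential_confounders(dag, exposure, outcome):
    """Variables with a directed path to BOTH the exposure and the outcome."""
    return {v for v in dag
            if v not in (exposure, outcome)
            and exposure in descendants(dag, v)
            and outcome in descendants(dag, v)}

dag = {
    "age":      ["exposure", "outcome"],  # influences both -> potential confounder
    "habitat":  ["outcome"],              # influences the outcome only
    "exposure": ["outcome"],
}
print(potential_confounders(dag, "exposure", "outcome"))  # {'age'}
```

In practice dedicated DAG software (e.g. DAGitty, as discussed in the cited literature) also handles indirect paths and adjustment sets; this sketch only illustrates the basic "common cause" criterion.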

2. Bias in selection of subjects/areas into the study (at or after initiation of the exposure and comparator)

(commonly referred to as selection bias)

Referred to as “risk of post-intervention/exposure selection biases” in the CEE tool [16]. These biases arise when some eligible subjects or areas are excluded in a way that leads to a spurious association between the exposure and outcome; that is, selection bias occurs when selection of subjects or study areas is related to both the exposure and the outcome [62]. Selection bias can arise from unconscious or intentional selection of samples or data such that they confirm or support prior beliefs or values of the investigator (also called confirmation bias). Systematic differences in the selection of subjects or areas into the study can also be caused by missing data, if there is differential missingness between the study groups; bias due to missing data is therefore a type of selection bias. The CEE tool includes bias due to missing data as a post-intervention/exposure selection bias [16]. We have highlighted bias due to missing data separately, consistent with the RoB 2 [62] and ROBINS-I [58] tools, for reasons explained below under bias class 5 “bias due to missing data”

3. Bias due to misclassification of the exposure

(observational studies only—see class 4 below for experimental studies)

Referred to as “risk of misclassified comparison biases” in the CEE tool [16]. These biases arise from misclassification or mismeasurement of the exposure and/or comparator which leads to a misrepresentation of the association between the exposure and the outcome (also known as measurement bias or information bias [65]). Accurate and precise definitions of exposure and comparator groups are necessary for avoiding misclassification

4. Bias due to deviation from the planned exposure (intervention) in experimental studies (also called performance bias)

(experimental studies only—see class 3 above for observational studies)

Referred to as “risk of performance biases” in the CEE tool [16]. These biases arise from alteration of the planned exposure or comparator treatment procedure(s) of interest after the start of the exposure, when the subjects or areas of interest continue to be analysed according to their intended exposure treatment

Deviations from the planned exposure could include the presence of co-exposures/co-interventions other than those intended; failure to implement some or all of the exposure components as intended; lack of adherence of subjects or areas to the intended exposure protocol; inadvertent application of one of the studied exposure protocols to subjects or areas intended to receive the other (contamination); and switches of subjects or areas from the intended exposure to other interventions/exposures (or to none)

5. Bias due to missing data (also called attrition bias)

Bias due to missing data can be considered a type of selection bias; in the CEE tool [16], bias due to missing data is included in the “risk of post-intervention/exposure selection biases” (i.e. bias class 2 above). We have highlighted bias due to missing data separately here to raise awareness of the importance of checking studies for missing data, given that 8 of the 10 recently published CEE systematic reviews did not consider risks of bias due to missing data (Additional file 2)

Risks of bias due to missing data can arise when follow-up data for subjects or areas initially included and followed in the study are not fully available for inclusion in the analysis of the effect estimate. The risk of bias depends on (i) an imbalance in the amount of missing data between the exposure and comparator groups (differential missingness); (ii) whether the reason(s) for the data being missing are related to the exposure or the outcome; and (iii) whether the proportion of the intended analysis population that is missing is considered sufficient for the bias to substantively influence the effect estimate [58, 65]

6. Bias in measurement of outcomes

(also called detection bias)

Referred to as “risk of detection biases” in the CEE tool [16]. These are biases arising from systematic differences in measurements of outcomes (also known as measurement bias [65]). Systematic errors in measurement of outcomes may occur if outcome data are determined differently between the exposure and comparator groups, either intentionally (e.g. influence of desire to obtain a certain direction of effect) or unintentionally (e.g. due to cognitive bias or human errors). When studying complex systems, and especially when many steps are involved in measuring outcomes, each calibration method or applied instrument may need to be the same between groups; if any devices or their measurements differ between study groups this may introduce bias [66]

7. Bias in selection of the reported result

(also called reporting biases)

Referred to as “risk of outcome reporting biases” in the CEE tool [16]. These are biases arising from selective reporting of study findings. Selective reporting may appear at three different levels [62]: (i) presentation of selected findings from multiple measurements; (ii) presentation of results for selected subgroups or subpopulations of the planned analysis population; and (iii) presentation of selective findings from multiple analyses

8. Bias due to an inappropriate statistical analysis approach (may also be called statistical conclusion validity)

Referred to as “risk of outcome assessment biases” in the CEE tool [16]. These are biases due to errors in statistical methods applied within the individual studies included in a systematic review. There is currently no such bias class in widely applied risk-of-bias assessment tools in medicine and health research (RoB 2 [62] and ROBINS-I [58]) although it has been argued that this is an important source of bias that should be considered [55]. Issues with statistical validity can be divided into four main areas: (i) data analysts’ awareness of the exposure or comparator received by study subjects or areas (blinding of data analysts could mitigate the risk of bias); (ii) errors in applied descriptive statistics (e.g. miscalculation of sample sizes, means, or variances, including pseudoreplication [67]); (iii) errors in applied inferential statistics (including flawed null hypothesis testing, estimation, or coding); (iv) use of inappropriate statistical tests or violation of assumptions required by tests (e.g. criteria for normality and equal variances are not satisfied)
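Pseudoreplication, cited under point (ii) above as a miscalculation of sample size, has a concrete remedy: aggregate subsamples within each independent unit before analysis, so that n reflects independent replicates rather than repeated measurements. The following is a minimal sketch with hypothetical plot data (plot labels and values are illustrative assumptions):

```python
# Sketch: avoiding pseudoreplication by averaging subsamples within each
# independent unit (plot) before group comparison. Data are hypothetical.

from statistics import mean

# Three subsample measurements per plot; the plots, not the subsamples,
# are the independent replicates, so n = number of plots (2), not 6.
treatment_plots = {"T1": [5.1, 5.3, 4.9], "T2": [6.0, 5.8, 6.2]}
control_plots   = {"C1": [4.0, 4.2, 4.1], "C2": [3.9, 4.1, 4.0]}

treatment_means = [mean(v) for v in treatment_plots.values()]
control_means   = [mean(v) for v in control_plots.values()]

print("n (treatment) =", len(treatment_means))  # 2 plots, not 6 measurements
print("plot-level means:", treatment_means, control_means)
```

Any subsequent inferential test would then be run on the plot-level means, with degrees of freedom based on the number of plots; treating the six subsamples as independent observations would inflate the apparent sample size.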

9. Other risks of bias

Any risks of bias or confounding pertinent to the study design(s) of interest that are not covered in the eight classes of bias above. Includes risks of bias that are inherent to specific study designs such as test accuracy studies [68, 69]