

Korean J Anesthesiol. 2018 April; 71(2): 103–112.

Introduction to systematic review and meta-analysis

EunJin Ahn

1Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea

Hyun Kang

2Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea

Received 2017 Dec 13; Revised 2018 Feb 28; Accepted 2018 Mar 14.

Abstract

Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in diverse fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, the inclusion of studies with any biases or improperly assessed quality of evidence in systematic reviews and meta-analyses could yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides an easy introduction to clinicians on performing and understanding meta-analyses.

Keywords: Anesthesiology, Meta-analysis, Randomized controlled trial, Systematic review

Introduction

A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [1]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [2] (Fig. 1). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [3], and the appearance of registers such as the Cochrane Library's Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [5].

[Fig. 1]

In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses include various topics, such as comparing various treatments of postoperative nausea and vomiting [14,15], comparing general anesthesia and regional anesthesia [16–18], comparing airway maintenance devices [8,19], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [20–23], comparing the precision of various monitoring instruments [7], and meta-analysis of dose-response in various drugs [12].

Thus, literature reviews and meta-analyses are being conducted in diverse medical fields, and the aim of highlighting their importance is to help extract accurate, good quality data from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect data may be obtained. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a manner that is easy to understand for future authors and readers of systematic reviews and meta-analyses.

Study Planning

It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [1]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2. We explain each of the stages below.

Fig. 2. Flowchart illustrating a systematic review.

Formulating research questions

A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the word "similar" is not clearly defined, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, selection of the research topic is based on logical evidence, and it is important to select a topic that is familiar to readers without clearly confirmed evidence [24].
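The PICO frame can be sketched as a simple record. The entries below are hypothetical and only mirror the PONV drug comparison that appears later in the text:

```python
# Hypothetical PICO frame for a meta-analysis question, sketched as a plain
# dictionary. Each element should be specific enough that two reviewers can
# decide inclusion/exclusion of a candidate study without ambiguity.
pico = {
    "Population":   "adult surgical patients under general anesthesia",
    "Intervention": "palonosetron",
    "Comparison":   "ramosetron",
    "Outcomes":     ["postoperative nausea", "postoperative vomiting"],
}

for element, value in pico.items():
    print(f"{element}: {value}")
```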

Protocols and registration

In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO (http://www.crd.york.ac.uk/PROSPERO/), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.

Defining inclusion and exclusion criteria

Data is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.

Literature search and study selection

In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When there is a discrepancy in opinions, intervention is required via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [25].

Quality of evidence

However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [26]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute 1) . However, we are mostly focusing on meta-analyses that use randomized studies.

If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system (http://www.gradeworkinggroup.org/) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [27]. As shown in Table 1, the study limitations are evaluated using the "risk of bias" method proposed by Cochrane 2) . This method classifies bias in randomized studies as "low," "high," or "unclear" on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding of participants or investigators, incomplete outcome data, selective reporting, and other biases) [28].

Table 1.

The Cochrane Collaboration's Tool for Assessing the Risk of Bias [28]

Domain | Support of judgement | Review author's judgement
Sequence generation | Describe the method used to generate the allocation sequence in sufficient detail to allow for an assessment of whether it should produce comparable groups. | Selection bias (biased allocation to interventions) due to inadequate generation of a randomized sequence.
Allocation concealment | Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrollment. | Selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment.
Blinding | Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. | Performance bias due to knowledge of the allocated interventions by participants and personnel during the study.
| Describe all measures used, if any, to blind study outcome assessors from knowledge of which intervention a participant received. | Detection bias due to knowledge of the allocated interventions by outcome assessors.
Incomplete outcome data | Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group, reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors. | Attrition bias due to amount, nature, or handling of incomplete outcome data.
Selective reporting | State how the possibility of selective outcome reporting was examined by the review authors, and what was found. | Reporting bias due to selective outcome reporting.
Other bias | State any important concerns about bias not addressed in the other domains in the tool. If particular questions/entries were prespecified in the review's protocol, responses should be provided for each question/entry. | Bias due to problems not covered elsewhere in the table.

Data extraction

Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable differ across studies, the size and format of the outcomes also differ, and slight changes may be required when combining the data [29]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.

Data Analysis

The aim of a meta-analysis is to derive a conclusion with increased power and accuracy beyond what could be achieved in individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [30]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.

The pooled estimate is the outcome of the meta-analysis, and is typically explained using a forest plot (Figs. 3 and 4). The black squares in the forest plot are the odds ratios (ORs) and 95% confidence intervals in each study. The area of the squares represents the weight reflected in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
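The per-study numbers behind such a forest plot can be sketched as follows; the 2×2 counts below are made up for illustration, and a real analysis would take them from each included study:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """OR and 95% CI from one study's 2x2 table:
    a/b = events/non-events in the intervention group,
    c/d = events/non-events in the control group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Made-up counts: 15/100 events under treatment vs 25/100 under control.
or_, lo, hi = odds_ratio_ci(15, 85, 25, 75)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# Here the interval crosses OR = 1, so no significant difference is shown.
```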

Fig. 3. Forest plot analyzed by two different models using the same data. (A) Fixed-effect model. (B) Random-effect model. The figure depicts individual trials as filled squares with the relative sample size and the solid line as the 95% confidence interval of the difference. The diamond shape indicates the pooled estimate and uncertainty for the combined effect. The vertical line indicates the treatment group shows no effect (OR = 1). Moreover, if the confidence interval includes 1, then the result shows no evidence of difference between the treatment and control groups.

Fig. 4. Forest plot representing homogeneous data.

Dichotomous variables and continuous variables

In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used (Table 2).

Table 2.

Summary of Meta-analysis Methods Available in RevMan [28]

Type of data | Effect measure | Fixed-effect methods | Random-effect methods
Dichotomous | Odds ratio (OR) | Mantel-Haenszel (M-H), Inverse variance (IV), Peto | Mantel-Haenszel (M-H), Inverse variance (IV)
| Risk ratio (RR), Risk difference (RD) | Mantel-Haenszel (M-H), Inverse variance (IV) | Mantel-Haenszel (M-H), Inverse variance (IV)
Continuous | Mean difference (MD), Standardized mean difference (SMD) | Inverse variance (IV) | Inverse variance (IV)

MD = Absolute difference between the mean values in two groups

SMD = Difference in mean outcome between groups / Standard deviation of outcome among participants

The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of "0" for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than "0" means the new treatment method is less effective than the existing method, and a value greater than "0" means the new treatment is more effective than the existing method.
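As a sketch of these two measures, assuming raw per-patient values are available (real meta-analyses usually work from reported group means and SDs), the hypothetical pain scores below illustrate the computation:

```python
import math
import statistics

def mean_difference(group1, group2):
    """MD: absolute difference between the group means (same units)."""
    return statistics.mean(group1) - statistics.mean(group2)

def standardized_mean_difference(group1, group2):
    """SMD: mean difference divided by the pooled standard deviation,
    for outcomes reported in different units across studies."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return mean_difference(group1, group2) / pooled_sd

# Hypothetical postoperative pain scores (0-10) in two groups.
treated = [2, 3, 3, 4, 2, 3]
control = [5, 4, 6, 5, 4, 5]
print(f"MD  = {mean_difference(treated, control):.2f}")  # negative: less pain
print(f"SMD = {standardized_mean_difference(treated, control):.2f}")
```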

When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3, in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. The NNT can be obtained as the reciprocal, 1/ARR.

Table 3.

Calculation of the Number Needed to Treat from the Dichotomous Table

| Event occurred | Event not occurred | Sum
Intervention | a | b | a + b
Control | c | d | c + d
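Following the definitions above (x = c/(c + d), y = a/(a + b), ARR = x − y, NNT = 1/ARR), a minimal sketch with made-up counts:

```python
def number_needed_to_treat(a, b, c, d):
    """NNT from the 2x2 table: a/b = events/non-events (intervention),
    c/d = events/non-events (control)."""
    x = c / (c + d)   # event probability, control group
    y = a / (a + b)   # event probability, intervention group
    arr = x - y       # absolute risk reduction
    return 1 / arr

# Made-up counts: events in 10/100 treated patients vs 30/100 controls.
nnt = number_needed_to_treat(10, 90, 30, 70)
print(f"NNT = {nnt:.1f}")  # ARR = 0.3 - 0.1 = 0.2, so NNT = 5
```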

Fixed-effect models and random-effect models

In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation 3) , 2) Mantel-Haenszel estimation 4) , and 3) Peto estimation 5) .
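A minimal sketch of inverse variance-weighted estimation, using hypothetical log odds ratios and variances (the Mantel-Haenszel and Peto methods instead weight the 2×2 cell counts directly):

```python
def fixed_effect_pool(estimates, variances):
    """Inverse variance-weighted estimation: each study is weighted by the
    reciprocal of its variance, so large, precise studies dominate."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical log odds ratios and variances from three studies.
pooled, se = fixed_effect_pool([-0.4, -0.2, -0.6], [0.04, 0.10, 0.25])
print(f"pooled log OR = {pooled:.3f} (SE {se:.3f})")
```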

A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method 6) is mostly used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [31] (Table 2). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method 7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [32].
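The DerSimonian and Laird method can be sketched as follows with hypothetical, deliberately heterogeneous estimates; note how the random-effect weights come out far more even across studies than the fixed-effect weights 1/variance:

```python
def dersimonian_laird(estimates, variances):
    """Random-effect pooling: estimate the between-study variance tau^2 by
    the DerSimonian-Laird moment method, then reweight every study by
    1 / (within-study variance + tau^2)."""
    k = len(estimates)
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochrane's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1 / (v + tau2) for v in variances]  # far more even than w
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2

# Hypothetical, deliberately heterogeneous log odds ratios.
pooled, tau2 = dersimonian_laird([-0.8, 0.1, -0.5], [0.04, 0.05, 0.2])
print(f"pooled log OR = {pooled:.3f}, tau^2 = {tau2:.3f}")
```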

Fig. 3 shows the results of analyzing effect data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3, while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3, the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [33]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [34].

Heterogeneity

A homogeneity test is a method of testing whether the degree of heterogeneity is greater than would be expected to occur naturally when the effect size calculated from several studies is higher than the sampling error. This makes it possible to test whether the effect size calculated from several studies is the same. Three types of homogeneity tests can be used: 1) forest plot, 2) Cochrane's Q test (chi-squared), and 3) Higgins I² statistics. In the forest plot, as shown in Fig. 4, greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4, is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [35].

I² = 100% × (Q − df)/Q
(Q: chi-squared statistic, df: degrees of freedom of the Q statistic)

I², calculated as shown above, returns a value between 0 and 100%. A value less than 25% is considered to show strong homogeneity, a value of 50% is average, and a value greater than 75% indicates strong heterogeneity.
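Q and I² can be computed directly from the study estimates and their variances; a sketch with hypothetical values:

```python
def heterogeneity(estimates, variances):
    """Cochrane's Q and Higgins I^2 = 100% x (Q - df)/Q, truncated at zero,
    where df = number of studies - 1."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, 100 * (q - df) / q)
    return q, i2

# Hypothetical estimates and variances from three studies: Q far above its
# degrees of freedom signals heterogeneity.
q, i2 = heterogeneity([-0.8, 0.1, -0.5], [0.04, 0.05, 0.2])
print(f"Q = {q:.2f} (df = 2), I^2 = {i2:.0f}%")  # I^2 near 78%: strong heterogeneity
```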

Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves performing a regression analysis of the pooled estimate for covariance at the study level, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.

Publication bias

Publication bias is the most common type of reporting bias in meta-analyses. This refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used (Fig. 5). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of a publication bias (Fig. 5A) [29,36]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected (Fig. 5B). Second, to test publication bias statistically, Begg and Mazumdar's rank correlation test 8) [37] or Egger's test 9) [29] can be used. If publication bias is detected, the trim-and-fill method 10) can be used to correct the bias [38]. Fig. 6 displays results that show publication bias in Egger's test, which has then been corrected using the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
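The regression behind Egger's test can be sketched as below; this toy version returns only the intercept, whereas the full test also computes the intercept's standard error and a t-based P value. The inputs are hypothetical:

```python
def egger_intercept(estimates, std_errors):
    """Egger's regression sketch: regress each study's standard normal
    deviate (estimate / SE) on its precision (1 / SE). An intercept far
    from zero suggests funnel plot asymmetry."""
    y = [e / se for e, se in zip(estimates, std_errors)]
    x = [1 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # ordinary least squares intercept

# Hypothetical study effects (log ORs) and standard errors.
intercept = egger_intercept([-0.9, -0.6, -0.4, -0.2], [0.5, 0.3, 0.2, 0.1])
print(f"Egger intercept = {intercept:.2f}")
```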

Fig. 5. Funnel plot showing the effect size on the x-axis and sample size on the y-axis as a scatter plot. (A) Funnel plot without publication bias. The individual plots are broader at the bottom and narrower at the top. (B) Funnel plot with publication bias. The individual plots are located asymmetrically.

Fig. 6. Funnel plot adjusted using the trim-and-fill method. White circles: comparisons included. Black circles: imputed comparisons using the trim-and-fill method. White diamond: pooled observed log risk ratio. Black diamond: pooled imputed log risk ratio.

Result Presentation

When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE (Table 4). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.

Table 4.

The GRADE Evidence Quality for Each Outcome

Outcome | N | ROB | Inconsistency | Indirectness | Imprecision | Others | Palonosetron (%) | Ramosetron (%) | RR (CI) | Quality | Importance
PON | 6 | Serious | Serious | Not serious | Not serious | None | 81/304 (26.6) | 80/305 (26.2) | 0.92 (0.54 to 1.58) | Very low | Important
POV | 5 | Serious | Serious | Not serious | Not serious | None | 55/274 (20.1) | 60/275 (21.8) | 0.87 (0.48 to 1.57) | Very low | Important
PONV | 3 | Not serious | Serious | Not serious | Not serious | None | 108/184 (58.7) | 107/186 (57.5) | 0.92 (0.54 to 1.58) | Low | Important

When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis of a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.
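The z-test P value can be reproduced from the pooled estimate and its standard error; a sketch with hypothetical values:

```python
import math

def z_test_p(pooled_estimate, pooled_se):
    """Two-sided P value of the z-test for the pooled intervention effect
    (null hypothesis: effect = 0 on the analysis scale)."""
    z = pooled_estimate / pooled_se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Hypothetical pooled log OR of -0.37 with standard error 0.16.
p = z_test_p(-0.37, 0.16)
print(f"z-test P = {p:.3f}")
```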

A common mistake when reporting results is, given a z-test P value greater than 0.05, to say there was "no statistical significance" or "no difference." When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be explained as "a significant difference in the effects of the two treatment methods." However, the P value may appear non-significant whether or not there is a difference between the two treatment methods. In such a situation, it is better to state "there was no strong evidence for an effect," and to present the P value and confidence intervals. Another common mistake is to think that a smaller P value is indicative of a more significant effect. In meta-analyses of large-scale studies, the P value is more greatly affected by the number of studies and patients included than by the significance of the results; therefore, care should be taken when interpreting the results of a meta-analysis.

Conclusion

When performing a systematic literature review or meta-analysis, if the quality of the studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.

Footnotes

1) http://www.ohri.ca.

2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies.

3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.

4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.

5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.

6) The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-analysis software.

7) An alternative random-effect model meta-analysis that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are fewer than five studies with very unequal sizes, extra caution is needed.

8) The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [37].

9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [29].

10) If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.

References

1. Kang H. Statistical considerations in meta-analysis. Hanyang Med Rev. 2015;35:23–32. [Google Scholar]

two. Uetani K, Nakayama T, Ikai H, Yonemoto North, Moher D. Quality of reports on randomized controlled trials conducted in Nippon: evaluation of adherence to the CONSORT statement. Intern Med. 2009;48:307–thirteen. [PubMed] [Google Scholar]

3. Moher D, Cook DJ, Eastwood Southward, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–900. [PubMed] [Google Scholar]

4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34. [PubMed] [Google Scholar]

5. Willis BH, Quigley M. The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review. BMC Med Res Methodol. 2011;11:163. [PMC free article] [PubMed] [Google Scholar]

6. Chebbout R, Heywood EG, Drake TM, Wild JR, Lee J, Wilson M, et al. A systematic review of the incidence of and risk factors for postoperative atrial fibrillation following general surgery. Anaesthesia. 2018;73:490–8. [PubMed] [Google Scholar]

7. Chiang MH, Wu SC, Hsu SW, Chin JC. Bispectral Index and non-Bispectral Index anesthetic protocols on postoperative recovery outcomes. Minerva Anestesiol. 2018;84:216–28. [PubMed] [Google Scholar]

8. Damodaran S, Sethi S, Malhotra SK, Samra T, Maitra S, Saini V. Comparison of oropharyngeal leak pressure of air-Q, i-gel, and laryngeal mask airway supreme in adult patients during general anesthesia: A randomized controlled trial. Saudi J Anaesth. 2017;11:390–5. [PMC free article] [PubMed] [Google Scholar]

9. Kim MS, Park JH, Choi YS, Park SH, Shin S. Efficacy of palonosetron vs. ramosetron for the prevention of postoperative nausea and vomiting: a meta-analysis of randomized controlled trials. Yonsei Med J. 2017;58:848–58. [PMC free article] [PubMed] [Google Scholar]

10. Lam T, Nagappa M, Wong J, Singh M, Wong D, Chung F. Continuous pulse oximetry and capnography monitoring for postoperative respiratory depression and adverse events: a systematic review and meta-analysis. Anesth Analg. 2017;125:2019–29. [PubMed] [Google Scholar]

11. Landoni G, Biondi-Zoccai GG, Zangrillo A, Bignami E, D'Avolio S, Marchetti C, et al. Desflurane and sevoflurane in cardiac surgery: a meta-analysis of randomized clinical trials. J Cardiothorac Vasc Anesth. 2007;21:502–11. [PubMed] [Google Scholar]

12. Lee A, Ngan Kee WD, Gin T. A dose-response meta-analysis of prophylactic intravenous ephedrine for the prevention of hypotension during spinal anesthesia for elective cesarean delivery. Anesth Analg. 2004;98:483–90. [PubMed] [Google Scholar]

13. Xia ZQ, Chen SQ, Yao X, Xie CB, Wen SH, Liu KX. Clinical benefits of dexmedetomidine versus propofol in adult intensive care unit patients: a meta-analysis of randomized clinical trials. J Surg Res. 2013;185:833–43. [PubMed] [Google Scholar]

14. Ahn E, Choi G, Kang H, Baek C, Jung Y, Woo Y, et al. Palonosetron and ramosetron compared for effectiveness in preventing postoperative nausea and vomiting: a systematic review and meta-analysis. PLoS One. 2016;11:e0168509. [PMC free article] [PubMed] [Google Scholar]

15. Ahn EJ, Kang H, Choi GJ, Baek CW, Jung YH, Woo YC. The effectiveness of midazolam for preventing postoperative nausea and vomiting: a systematic review and meta-analysis. Anesth Analg. 2016;122:664–76. [PubMed] [Google Scholar]

16. Yeung J, Patel V, Champaneria R, Dretzke J. Regional versus general anaesthesia in elderly patients undergoing surgery for hip fracture: protocol for a systematic review. Syst Rev. 2016;5:66. [PMC free article] [PubMed] [Google Scholar]

17. Zorrilla-Vaca A, Healy RJ, Mirski MA. A comparison of regional versus general anesthesia for lumbar spine surgery: a meta-analysis of randomized studies. J Neurosurg Anesthesiol. 2017;29:415–25. [PubMed] [Google Scholar]

18. Zuo D, Jin C, Shan M, Zhou L, Li Y. A comparison of general versus regional anesthesia for hip fracture surgery: a meta-analysis. Int J Clin Exp Med. 2015;8:20295–301. [PMC free article] [PubMed] [Google Scholar]

19. Ahn EJ, Choi GJ, Kang H, Baek CW, Jung YH, Woo YC, et al. Comparative efficacy of the air-q intubating laryngeal airway during general anesthesia in pediatric patients: a systematic review and meta-analysis. Biomed Res Int. 2016;2016:6406391. [PMC free article] [PubMed] [Google Scholar]

20. Kirkham KR, Grape S, Martin R, Albrecht E. Analgesic efficacy of local infiltration analgesia vs. femoral nerve block after anterior cruciate ligament reconstruction: a systematic review and meta-analysis. Anaesthesia. 2017;72:1542–53. [PubMed] [Google Scholar]

21. Tang Y, Tang X, Wei Q, Zhang H. Intrathecal morphine versus femoral nerve block for pain control after total knee arthroplasty: a meta-analysis. J Orthop Surg Res. 2017;12:125. [PMC free article] [PubMed] [Google Scholar]

22. Hussain N, Goldar G, Ragina N, Banfield L, Laffey JG, Abdallah FW. Suprascapular and interscalene nerve block for shoulder surgery: a systematic review and meta-analysis. Anesthesiology. 2017;127:998–1013. [PubMed] [Google Scholar]

23. Wang K, Zhang HX. Liposomal bupivacaine versus interscalene nerve block for pain control after total shoulder arthroplasty: A systematic review and meta-analysis. Int J Surg. 2017;46:61–70. [PubMed] [Google Scholar]

24. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, et al. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD Statement. JAMA. 2015;313:1657–65. [PubMed] [Google Scholar]

25. Kang H. How to understand and conduct evidence-based medicine. Korean J Anesthesiol. 2016;69:435–45. [PMC free article] [PubMed] [Google Scholar]

26. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6. [PMC free article] [PubMed] [Google Scholar]

27. Dijkers M. Introducing GRADE: a systematic approach to rating evidence in systematic reviews and to guideline development. Knowl Translat Update. 2013;1:1–9. [Google Scholar]

28. Higgins JP, Altman DG, Sterne JA. Chapter 8: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.

29. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ. 1998;316:140–4. [PMC free article] [PubMed] [Google Scholar]

30. Higgins JP, Altman DG, Sterne JA. Chapter 9: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.

31. Deeks JJ, Altman DG, Bradburn MJ. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. In: Egger M, Smith GD, Altman DG, editors. Systematic Reviews in Health Care. London: BMJ Publishing Group; 2008. pp. 285–312. [Google Scholar]

32. IntHout J, Ioannidis JP, Borm GF. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method. BMC Med Res Methodol. 2014;14:25. [PMC free article] [PubMed] [Google Scholar]

33. Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database Syst Rev. 2007;(2):CD002755. [PMC free article] [PubMed] [Google Scholar]

34. Thompson SG. Controversies in meta-analysis: the case of the trials of serum cholesterol reduction. Stat Methods Med Res. 1993;2:173–92. [PubMed] [Google Scholar]

35. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. [PMC free article] [PubMed] [Google Scholar]

36. Sutton AJ, Abrams KR, Jones DR. An illustrated guide to the methods of meta-analysis. J Eval Clin Pract. 2001;7:135–48. [PubMed] [Google Scholar]

37. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–101. [PubMed] [Google Scholar]

38. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–63. [PubMed] [Google Scholar]


Articles from Korean Journal of Anesthesiology are provided here courtesy of Korean Society of Anesthesiologists


Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5903119/
