
The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta‐analysis, and clinical practice guideline: a systematic review



References (105)

Publisher
Wiley
Copyright
Copyright © 2015 Chinese Cochrane Center, West China Hospital of Sichuan University and Wiley Publishing Asia Pty Ltd
ISSN
1756-5383
eISSN
1756-5391
DOI
10.1111/jebm.12141
pmid
25594108

Abstract

Objective: To systematically review the methodological assessment tools for preclinical and clinical studies, systematic reviews and meta‐analyses, and clinical practice guidelines.

Methods: We searched PubMed, the Cochrane Handbook for Systematic Reviews of Interventions, the Joanna Briggs Institute (JBI) Reviewers' Manual, the Centre for Reviews and Dissemination, the Critical Appraisal Skills Programme (CASP), the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Clinical Excellence (NICE) up to May 20, 2014. Two authors selected studies and extracted data; a quantitative analysis was performed to summarize the characteristics of the included tools.

Results: We included a total of 21 assessment tools. A number of tools were developed by academic organizations, while some were developed by only small groups of researchers. The JBI developed the largest number of methodological assessment tools, with CASP second. Tools for assessing the methodological quality of randomized controlled trials (RCTs) were the most abundant. The Cochrane Collaboration's tool for assessing risk of bias is the best available tool for assessing RCTs. For cohort and case‐control studies, we recommend the Newcastle‐Ottawa Scale. The Methodological Index for Non‐Randomized Studies (MINORS) is an excellent tool for assessing non‐randomized interventional studies, and the Agency for Healthcare Research and Quality (AHRQ) methodology checklist is applicable to cross‐sectional studies. For diagnostic accuracy studies, the Quality Assessment of Diagnostic Accuracy Studies‐2 (QUADAS‐2) tool is recommended; the SYstematic Review Centre for Laboratory animal Experimentation (SYRCLE) risk-of-bias tool is available for assessing animal studies; the Assessment of Multiple Systematic Reviews (AMSTAR) is a measurement tool for systematic reviews and meta‐analyses; an 18‐item tool has been developed for appraising case series studies; and the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument is widely used to evaluate clinical practice guidelines.

Conclusions: We identified a variety of methodological assessment tools for different types of study design. However, further development of critical appraisal tools is warranted, since such tools are currently lacking for some fields (e.g., genetic studies), and some existing tools (those for nested case‐control studies and case reports, for example) need updating to be in line with current research practice and rigor. In addition, it is very important that all critical appraisal tools remain objective and that performance bias is effectively avoided.

Journal

Journal of Evidence Based Medicine, Wiley

Published: Feb 1, 2015
