J Interdiscip Dentistry
REVIEW ARTICLE
Year : 2012  |  Volume : 2  |  Issue : 3  |  Page : 158-163

Research design hierarchy: Strength of evidence in evidence-based dentistry


B H Mithun Pai, G Rajesh, R Shenoy
Department of Public Health Dentistry, Manipal College of Dental Sciences, Manipal University, Mangalore, Karnataka, India

Date of Web Publication: 11-Jun-2013

Correspondence Address:
B H Mithun Pai
Department of Public Health Dentistry, Manipal College of Dental Sciences, Manipal University, Mangalore, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2229-5194.113243

   Abstract 

As practitioners, thinking critically about how we make clinical decisions is important. As educators, we should evaluate how to teach students to make clinical decisions. Evidence-based dentistry forms an important asset for making clinical decisions, for practicing modern dentistry, and for educating dental care professionals. The cornerstone of evidence-based healthcare and health technology assessment is critical appraisal of the evidence underpinning a finding. The hierarchy of evidence includes several types of studies used to evaluate treatment effects, ranging from case reports through observational studies and randomized controlled trials (RCTs) to systematic reviews at the tip, which constitute the highest level of evidence because they attempt to collect, combine, and report the best available evidence using systematic, transparent, and reproducible methodology. Clinicians are interested in the highest quality research report available to determine the "best therapy" for their patients. This article will assist in framing clinical questions and categorizing the best available evidence. A search was initiated to locate original research articles, review articles, and case reports in the electronic databases PubMed, Google, and Google Scholar, using the keywords: evidence-based dentistry, hierarchy of evidence, ladder of evidence, research design hierarchy, evidence-based practice, and strength of evidence. This article is the result of a literature study on the evidence-based research design hierarchy.
Clinical Relevance to Interdisciplinary Dentistry

  • Evidence-based practice (EBP) is an interdisciplinary approach that has gained ground since 1992; hence, its usefulness in any discipline is worth attention.
  • This article explores links between the state of academic and clinical training regarding interdisciplinary EBP and describes strategies to accelerate the translation of evidence across disciplines.
  • This paper examines the concept of the hierarchy of research design, the barriers and challenges involved, and the application of evidence-based dentistry in practice.

Keywords: Evidence-based dentistry, hierarchy of evidence, ladder of evidence, research design hierarchy, strength of evidence


How to cite this article:
Mithun Pai B H, Rajesh G, Shenoy R. Research design hierarchy: Strength of evidence in evidence-based dentistry. J Interdiscip Dentistry 2012;2:158-63

How to cite this URL:
Mithun Pai B H, Rajesh G, Shenoy R. Research design hierarchy: Strength of evidence in evidence-based dentistry. J Interdiscip Dentistry [serial online] 2012 [cited 2023 Mar 28];2:158-63. Available from: https://www.jidonline.com/text.asp?2012/2/3/158/113243


   Introduction

"It is not what the man of science believes that distinguishes him, but how and why he believes it. His beliefs are tentative, not dogmatic; they are based on evidence, not on authority or intuition." As rightly quoted by Bernard Russell, which stands testimony for need of evidence-based approach in science.

The concept was pioneered by Sackett and colleagues, who generated "levels of evidence" for ranking the validity of evidence about the value of preventive maneuvers, and then tied them to "grades of recommendations." [1]

Each day, consciously and unconsciously, we make decisions regarding our patients' care. To make clinical decisions, almost instinctively, we rely on a wealth of resources: our own clinical experience, discussions with colleagues, textbooks, journal articles, and previous educational experiences. As practitioners, thinking critically about how we make clinical decisions is important. As educators, we should evaluate how to teach students to make clinical decisions. For making clinical decisions, practicing modern dentistry, and educating dental care professionals, evidence-based dentistry (EBD) forms an important asset. [2]


   What is Evidence-based Dentistry?


The foundation for evidence-based practice was laid by David Sackett who defined it as "integrating individual clinical expertise with the best available external clinical evidence from systematic research." [2]

The American Dental Association (ADA) defines EBD as "an approach to oral health care that requires the judicious integration of systematic assessments of clinically relevant scientific evidence, relating to the patient's oral and medical condition and history, with the dentist's clinical expertise and patient's treatment needs and preferences." [3]

The evidence-based movement first took hold in the medical field. Formally introduced in the 1990s by David Sackett and Gordon Guyatt of McMaster University, evidence-based medicine outlines a methodical way to incorporate the best available evidence into the decision-making process for clinical practice and patient treatments. [4]

The cornerstone of evidence-based healthcare and health technology assessment is critical appraisal of the evidence underpinning a finding. Different methods are available for assessing the quality of the evidence, including ranking the body of evidence according to a hierarchy that indicates the level of bias associated with each type of study. The development of rules and regulations for the evaluation of interventions was motivated by past mistakes, such as the thalidomide crisis, which caused great harm to several thousand people. [5]

The hierarchy of evidence includes several types of studies used to evaluate treatment effects, ranging from case reports through observational studies and randomized controlled trials (RCTs) to systematic reviews at the tip, which constitute the highest level of evidence because they attempt to collect, combine, and report the best available evidence using systematic, transparent, and reproducible methodology. The hierarchy of evidence has helped in assessing the quality of evidence and has been pivotal in translating the available evidence into clinical practice. [5]


   The Hierarchy of Evidence


Evidence-based practice involves tracking down the available evidence, assessing its validity, and then using the "best" evidence to inform decisions regarding care. Rules of evidence have been established to grade evidence according to its strength. Systematic reviews and RCTs represent the highest levels of evidence, whereas case reports and expert opinion are the lowest. This "ladder of evidence" was developed to a large extent for questions related to interventions or therapy.

Case reports and case series

Case reports and case series often provide a richness of information which cannot be conveyed in a trial. They permit discovery of new diseases and unexpected effects (adverse or beneficial), as well as the study of mechanisms of action. Case reports and series have a high sensitivity for detecting novelty, and therefore remain one of the cornerstones of medical progress. Case studies and case series form the lowest rungs of the evidence ladder. The phrase "lowest rungs" does not refer to the quality or value of the evidence, but rather to how such evidence is valued when used as a basis for making clinical decisions for humans, since isolated observations are collected in an uncontrolled, unsystematic manner and the information gained cannot be generalized to a larger population of patients. [6],[7],[8]

Cross-sectional studies

This design attempts to establish an association between a possible causal factor and a condition by determining exposure to the factor and "caseness" at the same time. Although this type of study is relatively easy and inexpensive to carry out and ethically acceptable, it can only establish an association, not a cause-and-effect relationship; it can, however, document the co-occurrence of disease and suspected risk factors in specific individuals. It is also useful for studying chronic diseases which have a high prevalence but an incidence that is too low to make a cohort study feasible. [6],[9]
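As a brief worked illustration (added in this rewrite, not part of the original article), the following Python sketch shows how an association in a cross-sectional study is typically summarized with a prevalence ratio; all counts are hypothetical.

# Hypothetical cross-sectional data: exposure and disease measured at the same time.
exposed_with_disease = 40
exposed_total = 200
unexposed_with_disease = 20
unexposed_total = 200

prevalence_exposed = exposed_with_disease / exposed_total        # 0.20
prevalence_unexposed = unexposed_with_disease / unexposed_total  # 0.10
prevalence_ratio = prevalence_exposed / prevalence_unexposed     # 2.0
print(f"Prevalence ratio: {prevalence_ratio:.1f}")
# A ratio of 2.0 indicates an association, but because exposure and disease
# are measured simultaneously, it cannot establish cause and effect.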

Case-control studies

Case-control studies provide a relatively simple way to investigate causes of diseases, especially rare diseases. They include people with a disease (or other outcome variable) of interest and a suitable control (comparison or reference) group of people unaffected by the disease or outcome variable. The study compares the occurrence of the possible cause in cases and in controls. The investigators collect data on disease occurrence at one point in time and exposures at a previous point in time. They are often conducted for the purpose of identifying variables that might predict the condition. [10],[11],[12]

The major advantage of the case-control design over other basic designs is its efficiency for studying rare diseases, especially diseases with long latent periods. A greater proportion of study costs for collecting exposure and covariate data can be devoted to cases rather than expending most available resources on non-cases. Thus, given a fixed sample size, case-control sampling in a study of a rare disease enhances the precision and power for estimating and testing the exposure effect. [13]
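To make the case-control logic concrete, here is a minimal Python sketch (an illustration with hypothetical counts, not drawn from the cited references): because a case-control study samples on disease status, incidence cannot be estimated directly, so the exposure odds ratio serves as the measure of effect.

import math

# Hypothetical 2x2 table from a case-control study.
a, b = 30, 70   # cases: exposed, unexposed
c, d = 10, 90   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)  # cross-product ratio, about 3.86

# Approximate 95% confidence interval (Woolf's logit method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")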

Cohort studies (prospective cohort, concurrent cohort)

This is a study in which a defined group of people (the cohort) is followed over time. The outcomes of subsets of this cohort are compared, contrasting people who were exposed with those who were not exposed (or who were exposed at different levels) to a particular intervention or other factor of interest. A prospective cohort study assembles participants and follows them into the future. [12] As cohort studies start with exposed and unexposed people, the difficulty of measuring or finding existing data on individual exposures largely determines the feasibility of conducting one of these studies. If the disease is rare in both the exposed and the unexposed group, there may also be problems in obtaining a large enough study group. [10]

The major strengths of this design derive from the fact that disease occurs and is detected after subjects are selected and after exposure status is measured. Another major strength of the cohort design is the usual lack of selection bias that threatens other basic designs. Selection bias is most likely to be problematic when the investigator does not identify the base population from which study cases arose (as in cross-sectional studies and certain case-control studies). [13]
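Because a cohort study measures exposure before disease occurs, it can estimate incidence directly. A minimal Python sketch of the resulting relative risk (hypothetical numbers, added here purely for illustration):

# Hypothetical cohort data: new cases counted during follow-up.
exposed_cases, exposed_n = 24, 300
unexposed_cases, unexposed_n = 8, 300

risk_exposed = exposed_cases / exposed_n        # cumulative incidence = 0.080
risk_unexposed = unexposed_cases / unexposed_n  # cumulative incidence = 0.027
relative_risk = risk_exposed / risk_unexposed   # 3.0
print(f"Relative risk: {relative_risk:.1f}")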

Historical cohort studies

Costs can occasionally be reduced by using a historical cohort (identified on the basis of records of previous exposure). This type of investigation is called a historical cohort study because all the exposure and effect (disease) data have been collected before the actual study begins. [10]

Nested case-control studies

The nested case-control design makes cohort studies less expensive. The cases and controls are both chosen from a defined cohort, for which some information on exposures and risk factors is already available. Additional information on new cases and controls, specifically selected for the study, is collected and analyzed. This design is particularly useful when measurement of exposure is expensive. [10]

Randomized controlled trials

RCTs are the gold standard by which all clinical research is judged. [6] This is the strongest type of experimental design, in which subjects are randomly assigned to experimental and control groups to support cause-and-effect relationships. [14] RCTs are considered the criterion standard for assessing whether a treatment or intervention is efficacious - in other words, whether the treatment "works" under ideal conditions. In an RCT, subjects are randomly allocated to receive the active treatment or a control (placebo, no treatment, or even an active control), so each patient has an equal chance of assignment to the intervention or control group. Randomization produces study groups that are comparable in terms of measured and unmeasured variables, apart from the intervention itself. Thus, confounding is not a threat to the internal validity of a properly conducted RCT, and any differences in outcome that occur between groups can be attributed to the intervention. Some RCTs provide enough evidence to be incorporated directly into clinical practice. [7]

Despite this significant strength, RCTs also have important limitations. They are typically expensive, time consuming, and designed to answer a single question or small number of questions about treatment efficacy that are usually narrow in scope. Thus, the RCT design may be impractical or even inappropriate to address questions beyond therapeutic efficacy in a well-circumscribed population. [14]

Randomization of treatment allocation is what makes the RCT one of the simplest and most powerful tools of scientific research. In any study involving people, there are potentially many unknown factors, for example, genetic or lifestyle factors, which can have a bearing on the outcome. Randomization, if done properly, reduces the risk that these unknown factors will be seriously unbalanced in the various study groups. The allocation sequence must be randomly generated. This can be done by the flip of a coin or, more usually, by using random number tables or computer-generated sequences. [6]
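As an illustrative sketch of a computer-generated allocation sequence (the function name, block size, and seed below are choices made for this example, not a method prescribed by the article), the following Python snippet implements permuted-block randomization, which keeps the two arms balanced throughout recruitment:

import random

def block_randomization(n_participants, block_size=4, seed=42):
    # Each block contains an equal number of intervention and control slots,
    # shuffled so the order within a block is unpredictable.
    rng = random.Random(seed)  # fixed seed only so this example is reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomization(12))

In practice, the sequence would be generated and concealed by someone independent of recruitment, so that allocation concealment is preserved.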

Blinding is another key feature of RCTs. A "double-blind" trial is one in which neither the researcher nor the patient knows whether the patient is in the experimental group or the control group. This design is most useful when the control group receives an identical placebo drug or "sham" intervention, but it falls down in many types of important studies. [6]

Quasi-experimental studies

In some situations, an experimental design is not feasible because of ethical or logistical constraints. [14] Quasi-experiments are studies that aim to evaluate interventions but do not use randomization. Like randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. [15]

The main pitfalls of quasi-experimental studies are that standardized measurements may be biased by an inability to maintain blinding to the allocation code, that blinding is increasingly likely to fail because of a known allocation method or "broken code," and that the statistical assumption of randomized allocation may be violated. [16]

Systematic reviews and meta-analyses

Systematic reviews and meta-analyses are considered the gold standard for evidence because of their strict protocols to reduce bias and to synthesize and analyze already completed studies. A systematic review is a scientific tool that can be used to appraise, summarize, and project the results and implications of otherwise unmanageable quantities of research. In this way, healthcare providers can efficiently evaluate existing or new technologies and practices and consider the totality of available evidence. Systematic reviews are of particular value in bringing together a number of separately conducted studies, sometimes with conflicting findings, and synthesizing the results. Systematic reviews may or may not include a statistical synthesis called meta-analysis, depending on whether the included studies are similar enough that combining their results is meaningful. Although "meta-analysis" is often used interchangeably with "systematic review," strictly speaking, a meta-analysis is an optional component of a systematic review. [17]
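To illustrate what the statistical synthesis involves (a sketch added here, with hypothetical effect estimates, not taken from the cited reference), the following Python snippet pools study results with a simple fixed-effect, inverse-variance meta-analysis on the log odds ratio scale:

import math

# Hypothetical study results: (log odds ratio, standard error).
studies = [
    (math.log(0.70), 0.15),
    (math.log(0.85), 0.20),
    (math.log(0.60), 0.25),
]

# Each study is weighted by the inverse of its variance, so more precise
# studies contribute more to the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lower = math.exp(pooled - 1.96 * pooled_se)
upper = math.exp(pooled + 1.96 * pooled_se)
print(f"Pooled OR = {math.exp(pooled):.2f}, 95% CI {lower:.2f}-{upper:.2f}")

Pooling in this way is only appropriate when the studies are clinically and methodologically similar; when heterogeneity is present, a random-effects model is commonly used instead.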

Systematic reviews appear at the top of the hierarchy of evidence. This reflects the fact that when rigorously conducted, they should give us the best possible estimate of any true effect. However, caution must be exercised before accepting the findings of any systematic review without first appraising it. Like any piece of research, a systematic review may be done poorly. Not all systematic reviews are rigorous and unbiased. Little attention may have been paid to the intervention, the patient selection group, or the search strategy. Systematic reviews offer evidence that is as good as the best available evidence summarized by the review. For example, for a given research question, high-quality systematic reviews including high-quality trials would yield stronger inferences than the systematic reviews of lower quality trials or well-conducted observational studies. Stronger inferences will also be drawn when the studies in the review show consistent answers or when the inconsistency can be explained (often through subgroup analyses). Thus, systematic reviews contribute by improving the applicability of the evidence, and through meta-analyses, by increasing the precision of the estimates of treatment effect. [19]

A hierarchy of study designs has been proposed on the basis of the strength of evidence that these studies provide for causality, and experimental studies, especially RCTs, reside at the pinnacle. This hierarchy is most appropriate for evaluating the efficacy of a treatment or intervention. However, experimental studies are not always feasible or appropriate, and are often not well suited to answer important questions, such as those on the safety and effectiveness of therapies in the real world. It is when asking questions about therapy that we should try to avoid non-experimental approaches, since these routinely lead to false-positive conclusions about efficacy. [14] Because the randomized trial, and especially the systematic review of several randomized trials, is so much more likely to inform us and so much less likely to mislead us, it has become the "gold standard" for judging whether a treatment does more good than harm. However, some questions about therapy do not require randomized trials (e.g., successful interventions for otherwise fatal conditions) or cannot wait for the trials to be conducted. The study designs best suited to different types of questions and study objectives are described in [Table 1]. [20] If no randomized trial has been carried out for our patient's predicament, we must follow the trail to the next best external evidence and work from there. [14],[21]
Table 1: Study designs best suited to the type and objective of the study




   Grading the Strength of Evidence


Guyatt et al. developed a grading system based on the idea that guideline panels should make recommendations to administer or not administer an intervention on the basis of a trade-off between benefits on the one hand and risks, burdens, and potential costs on the other. They provided recommendations at two levels, strong and weak, as illustrated in [Table 2]. A Grade 1 (strong) recommendation is made when guideline panels are very certain that the benefits do, or do not, outweigh the risks and burdens. A Grade 2 (weak) recommendation is made when panels consider that the benefits, risks, and burdens are finely balanced, or when appreciable uncertainty exists about the magnitude of the benefits and risks. [17] ([Table 2] has been adapted from Guyatt et al.'s grading of the strength of recommendations and quality of evidence in clinical guidelines.) [22]
Table 2: Grading strength of recommendations and quality of evidence in clinical guidelines




   Discussion


A hierarchy of study designs has been proposed on the basis of the strength of evidence that these studies provide for causality, with experimental studies at the pinnacle. A persistent favoritism toward the same kinds of study designs, even when other designs are more logical and sometimes more feasible, suggests that researchers have not always moved with the times. However, experimental studies are not always feasible or appropriate and are often not well suited to answer important questions, such as those on the safety and effectiveness of therapies in individuals, the impact of risk factors on outcomes, or the effects of policies. The ultimate interpretation of the medical literature requires not only an understanding of the strengths and limitations of different study designs, but also an appreciation of the circumstances in which the traditional rigid hierarchy does not apply, and in which integration of complementary information derived from various study designs is needed for a holistic approach to any question deemed important for the welfare of people and the community.

The purpose of this article is to highlight the important features of research design that clinicians can use to determine which articles are useful when attempting to answer clinical questions. This article offers a systematic means of categorizing the quality of research reports for clinicians and clinical investigators. Clinical investigators are interested in the entire continuum of research reports, as they work to define the definitive research question and research method. Clinicians are interested in the highest quality research report available to determine the "best therapy" for their patients. This article will assist in framing the questions and categorizing the best available evidence.

 
   References

1. Ballini A, Capodiferro S, Toia M, Cantore S, Favia G, De Frenza G, et al. Evidence-based dentistry: What's new? Int J Med Sci 2007;4:174-8.
2. Goldstein GR. What is evidence based dentistry? Dent Clin North Am 2002;46:1-9.
3. Hackshaw AK, Paul EA, Davenport ES. Evidence-based dentistry: An introduction. Oxford: Blackwell Munksgaard; 2006. p. 1-2.
4. Rabb-Waytowich D. Evidence-based dentistry: Part 1. An overview. J Can Dent Assoc 2009;75:27-8.
5. Pandis N. The evidence pyramid and introduction to randomized controlled trials. Am J Orthod Dentofacial Orthop 2011;140:446-7.
6. Sutherland SE. Evidence-based dentistry: Part VI. Critical appraisal of the dental literature: Papers about diagnosis, etiology and prognosis. J Can Dent Assoc 2001;67:582-5.
7. McKeon PO, Medina JM, Hertel J. The hierarchy of research design evidence in sports medicine. Athl Ther Today 2006;11:42-5.
8. Hujoel P. Grading the evidence: The core of EBD. J Evid Based Dent Pract 2009;9:122-4.
9. Zaccai JH. How to assess epidemiological studies. Postgrad Med J 2004;80:140-7.
10. Bonita R, Beaglehole R, Kjellström T. Basic epidemiology. 2nd ed. Geneva: World Health Organization; 2006.
11. Health Policy and Clinical Effectiveness/Center for Professional Excellence/Pratt Library. EBDM tools and resources. Available from: http://groups/ce/NewEBC/EBCFiles/GLOSSARY-EBDM.pdf. [Last accessed on 2012 Jun 22].
12. Jackson SF, Fazal N, Giesbrecht N. A hierarchy of evidence: Which intervention has the strongest evidence of effectiveness? Available from: http://datapdf.net/a-hierarchy-of-evidence.html. [Last accessed on 2012 Jul 04].
13. Morgenstern H, Thomas D. Principles of study design in environmental epidemiology. Environ Health Perspect 1993;101:23-38.
14. Ho PM, Peterson PN, Masoudi FA. Evaluating the evidence: Is there a rigid hierarchy? Circulation 2008;118:1675-84.
15. Harris AD, Bradham DD, Baumgarten M, Zuckerman IH, Fink JC, Perencevich EN. The use and interpretation of quasi-experimental studies in infectious diseases. Clin Infect Dis 2004;38:1586-91.
16. Jacob RF, Carr AB. Hierarchy of research design used to categorize the "strength of evidence" in answering clinical dental questions. J Prosthet Dent 2000;83:137-52.
17. Green S. Systematic reviews and meta-analysis. Singapore Med J 2005;46:270-3.
18. Elamin MB, Montori VM. The hierarchy of evidence: From unsystematic clinical observations to systematic reviews. In: Burneo JG, editor. Neurology: An evidence-based approach. New York: Springer Science+Business Media; 2012. p. 18.
19. Manchikanti L, Benyamin R, Helm S, Hirsch J. Evidence-based medicine, systematic reviews, and guidelines in interventional pain management: Part 3: Systematic reviews and meta-analyses of randomized trials. Pain Physician 2009;12:35-72.
20. University of North Carolina Health Sciences Library. Evidence-based medicine tutorial: Question supplement. Available from: http://www.hsl.unc.edu/Services/Tutorials/EBM/Supplements/QuestionSupplement.htm. [Last accessed on 2012 Jun 22].
21. Worrall J. Evidence in medicine and evidence-based medicine. Philos Compass 2007;2:981-1022.
22. Guyatt G, Gutterman D, Baumann MH, Addrizzo-Harris D, Hylek EM, Phillips B, et al. Grading strength of recommendations and quality of evidence in clinical guidelines: Report from an American College of Chest Physicians task force. Chest 2006;129:174-81.



 
 