Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success
BMJ (British Medical Journal)
Kensaku Kawamoto, fellow,1 Caitlin A Houlihan,1 E Andrew Balas,2 David F Lobach1
1 Division of Clinical Informatics, Department of Community and Family Medicine, Box 2914, Duke University Medical Center, Durham, NC 27710, USA, 2 College of Health Sciences, Old Dominion University, Norfolk, VA 23529, USA
Correspondence to: David F Lobach david.lobach@duke.edu
Objective To identify features of clinical decision support systems critical for improving clinical practice.
Design Systematic review of randomised controlled trials.
Data sources Literature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews.
Study selection Studies had to evaluate the ability of decision support systems to improve clinical practice.
Data extraction Studies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature.
Results Seventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations.
Conclusions Several features were closely correlated with decision support systems' ability to improve patient care significantly. Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate.
Recent research has shown that health care delivered in industrialised nations often falls short of optimal, evidence based care. A nationwide audit assessing 439 quality indicators found that US adults receive only about half of recommended care,1 and the US Institute of Medicine has estimated that up to 98 000 US residents die each year as the result of preventable medical errors.2 Similarly a retrospective analysis at two London hospitals found that 11% of admitted patients experienced adverse events, of which 48% were judged to be preventable and of which 8% led to death.3
To address these deficiencies in care, healthcare organisations are increasingly turning to clinical decision support systems, which provide clinicians with patient-specific assessments or recommendations to aid clinical decision making.4 Examples include manual or computer based systems that attach care reminders to the charts of patients needing specific preventive care services and computerised physician order entry systems that provide patient-specific recommendations as part of the order entry process. Such systems have been shown to improve prescribing practices,5-7 reduce serious medication errors,8 9 enhance the delivery of preventive care services,10 11 and improve adherence to recommended care standards.4 12 Compared with other approaches to improve practice, these systems have also generally been shown to be more effective and more likely to result in lasting improvements in clinical practice.13-22
Clinical decision support systems do not always improve clinical practice, however. In a recent systematic review of computer based systems, most (66%) significantly improved clinical practice, but 34% did not.4 Relatively little sound scientific evidence is available to explain why systems succeed or fail.23 24 Although some investigators have tried to identify the system features most important for improving clinical practice,12 25-34 they have typically relied on the opinion of a limited number of experts, and none has combined a systematic literature search with quantitative meta-analysis. We therefore systematically reviewed the literature to identify the specific features of clinical decision support systems most crucial for improving clinical practice.
Methods
Data sources
We searched Medline (1966-2003), CINAHL (1982-2003), and the Cochrane Controlled Trials Register (2003) for relevant studies using combinations of the following search terms: decision support systems, clinical; decision making, computer-assisted; reminder systems; feedback; guideline adherence; medical informatics; communication; physician's practice patterns; reminder$; feedback$; decision support$; and expert system. We also systematically searched the reference lists of included studies and relevant reviews.
Inclusion and exclusion criteria
We defined a clinical decision support system as any electronic or non-electronic system designed to aid directly in clinical decision making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration.4 We included both electronic and non-electronic systems because we felt the use of a computer represented only one of many potentially important factors. Our inclusion criteria were any randomised controlled trial evaluating the ability of a clinical decision support system to improve an important clinical practice in a real clinical setting; use of the system by clinicians (physicians, physician assistants, or nurse practitioners) directly involved in patient care; and assessment of improvements in clinical practice through patient outcomes or process measures. Our exclusion criteria were less than seven units of randomisation per study arm; study not in English; mandatory compliance with decision support system; lack of description of decision support content or of clinician interaction with system; and score of less than five points on a 10 point scale assessing five potential sources of study bias.4
Study selection
Two authors independently reviewed the titles, index terms, and abstracts of the identified references and rated each paper as "potentially relevant" or "not relevant" using a screening algorithm based on study type, study design, subjects, setting, and intervention. Two authors then independently reviewed the full texts of the selected articles and again rated each paper as "potentially relevant" or "not relevant" using the screening algorithm. Finally, two authors independently applied the full set of inclusion and exclusion criteria to the potentially relevant studies to select the final set of included studies. Disagreements between reviewers were resolved by discussion, and we measured inter-rater agreement using Cohen's unweighted κ statistic.35
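Cohen's unweighted κ compares the observed proportion of agreement between two raters with the agreement expected by chance from each rater's marginal rating frequencies. A minimal sketch in Python, using hypothetical screening decisions rather than data from this review:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's unweighted kappa for two raters' nominal ratings."""
    assert len(r1) == len(r2)
    n = len(r1)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical screening decisions by two reviewers (not data from the review)
rev1 = ["relevant", "relevant", "not", "not", "relevant",
        "not", "relevant", "relevant", "not", "relevant"]
rev2 = ["relevant", "not", "not", "not", "relevant",
        "not", "relevant", "relevant", "not", "relevant"]
print(round(cohen_kappa(rev1, rev2), 2))  # 0.8
```

Here the raters agree on 9 of 10 papers (po = 0.9) but would agree on half by chance (pe = 0.5), giving κ = 0.8.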
Data extraction
A study may include several trial arms, so multiple relevant comparisons may exist within a single study. For each relevant comparison, two reviewers independently assessed whether the clinical decision support system resulted in an improvement in clinical practice that was both statistically and clinically significant. In some cases, changes in practice characterised as clinically significant by the study authors were deemed non-significant by the reviewers. We considered effect size as an alternative outcome measure but concluded that its use would have been misleading given the significant heterogeneity among the outcome measures reported by the included studies. We also anticipated that the use of effect size would have led to the exclusion of many relevant trials, as many studies fail to report all the statistical elements needed to reconstruct effect sizes accurately.
Next, two reviewers independently determined the presence or absence of specific features of decision support systems that could potentially explain why a system succeeded or failed. To construct a set of potential explanatory features, we systematically examined all relevant reviews and primary studies identified by our search strategy and recorded any factors suggested to be important for system effectiveness. Both technical and non-technical factors were eligible for inclusion. Also, if a factor was designated as a barrier to effectiveness (such as "the need for clinician data entry limits system effectiveness") we treated the logically opposite concept as a potential success factor (such as "removing the need for clinician data entry enhances system effectiveness"). We then limited our consideration to features that were identified as being potentially important by at least three sources, which left us with 22 potential explanatory features, including general system features, system-clinician interaction features, communication content features, and auxiliary features (tables 1 and 2). Of these 22 features, 15 could be included in our analysis (table 1) because their presence or absence could be reliably extracted from most studies, whereas the remaining seven could not (table 2).
Table 1 Descriptions of the 15 features of clinical decision support systems (CDSS) included in statistical analyses
Table 2 The seven potential explanatory features of clinical decision support systems (CDSS) that could not be included in the statistical analyses
Data synthesis
We used three methods to identify clinical decision support system features important for improving clinical practice.
Univariate analyses—For each of the 15 selected features we individually determined whether interventions possessing the feature were significantly more likely to succeed (result in a statistically and clinically significant improvement in clinical practice) than interventions lacking the feature. We used StatXact55 to calculate 95% confidence intervals for individual success rates56 and for differences in success rates.57
Multiple logistic regression analyses—For these analyses, the presence or absence of a statistically and clinically significant improvement in clinical practice constituted the binary outcome variable, and the presence or absence of specific decision support system features constituted binary explanatory variables. We included only cases in which the clinical decision support system was compared against a true control group. For the primary meta-regression analysis, we pooled the results from all included studies, so as to maximise the power of the analysis while decreasing the risk of false positive findings from over-fitting of the model.58 We also conducted separate secondary regression analyses for computer based systems and for non-electronic systems. For all analyses, we included one indicator for the decision support subject matter (acute care v non-acute care) and two indicators for the study setting (academic v non-academic, outpatient v inpatient) to assess the role of potential confounding factors related to the study environment. With the 15 system features and the three environmental factors constituting the potential explanatory variables, we conducted logistic regression analyses using LogXact-5.59 Independent predictor variables were included in the final models using forward selection and a significance level of 0.05.
Direct experimental evidence—We systematically identified studies in which the effectiveness of a given decision support system was directly compared with the effectiveness of the same system with additional features. We considered a feature to have direct experimental evidence supporting its importance if its addition resulted in a statistically and clinically significant improvement in clinical practice.
Results
Description of studies
Of 10 688 potentially relevant articles screened, 88 papers describing 70 studies met all our inclusion and exclusion criteria (figure).w1-w88 Inter-rater agreements for study selection and data extraction were satisfactory (table 3). The 70 studies contained 82 relevant comparisons, of which 71 compared a clinical decision support system with a control group (control-system comparisons) and 11 directly compared a system with the same system plus extra features (system-system comparisons). We used the control-system comparisons to identify system features statistically associated with successful outcomes and the system-system comparisons to identify features with direct experimental evidence of their importance.
Selection process of trials of clinical decision support systems (CDSS) for review
Table 3 Inter-rater agreement for study selection and data extraction in review of trials of clinical decision support systems (CDSS)
Table 4 describes the characteristics of the 70 included studies. Between them, about 6000 clinicians acted as study subjects while caring for about 130 000 patients. The commonest types of decision support system were computer based systems that provided patient-specific advice on printed encounter forms or on printouts attached to charts (34%),w2 w4 w6 w8-w10 w12 w14 w19 w22-w28 w32 w34-w49 non-electronic systems that attached patient-specific advice to appropriate charts (26%),w1 w29-w31 w50-w66 and systems that provided decision support within computerised physician order entry systems (16%).w3 w7 w11 w15-w17 w20 w67-w71
Table 4 Characteristics of the 70 studies of clinical decision support systems (CDSS) included in review
Univariate analyses of clinical decision support system features
Table 5 summarises the success rates of clinical decision support systems with and without the 15 potentially important features. Overall, 48 of the 71 decision support systems (68% (95% confidence interval 56% to 78%)) significantly improved clinical practice. For five of the 15 features, the success rate of interventions possessing the feature was significantly greater than that of interventions lacking the feature.
Table 5 Success rates* of clinical decision support systems (CDSS) with and without 15 potentially important features. Results of 71 control-CDSS comparisons
Most notably, 75% of interventions succeeded when the decision support was provided to clinicians automatically, whereas none succeeded when clinicians were required to seek out the advice of the decision support system (rate difference 75% (37% to 84%)). Similarly, systems that were provided as an integrated component of charting or order entry systems were significantly more likely to succeed than stand alone systems (rate difference 37% (6% to 61%)); systems that used a computer to generate the decision support were significantly more effective than systems that relied on manual processes (rate difference 26% (2% to 49%)); systems that prompted clinicians to record a reason when not following the advised course of action were significantly more likely to succeed than systems that allowed the system advice to be bypassed without recording a reason (rate difference 41% (19% to 54%)); and systems that provided a recommendation (such as "Patient is at high risk of coronary artery disease; recommend initiation of β blocker therapy") were significantly more likely to succeed than systems that provided only an assessment of the patient (such as "Patient is at high risk of coronary artery disease") (rate difference 35% (8% to 58%)).
Finally, systems that provided decision support at the time and location of decision making were substantially more likely to succeed than systems that did not provide advice at the point of care, but the difference in success rates fell just short of being significant at the 0.05 level (rate difference 48% (-0.46% to 70.01%)).
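The confidence intervals above were computed with exact methods in StatXact. As a rough cross-check only (the Wilson score interval is a common approximation, not the exact method the review used), the overall success rate of 48/71 can be reproduced almost exactly:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - margin, centre + margin

# 48 of the 71 control-system comparisons significantly improved practice
lo, hi = wilson_ci(48, 71)
print(f"{lo:.0%} to {hi:.0%}")  # close to the reported exact interval of 56% to 78%
```

The approximation gives roughly 56% to 77%, agreeing with the exact 56% to 78% reported in the text to within rounding.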
Meta-regression analysis
The univariate analyses evaluated each potential success factor in isolation from the other factors. We therefore conducted multivariate logistic regression analyses in order to identify independent predictors of clinical decision support system effectiveness while taking into consideration the presence of other potentially important factors. Table 6 shows the results of these analyses.
Table 6 Features of clinical decision support systems (CDSS) associated with improved clinical practice. Results of meta-regression analyses of 71 control-CDSS comparisons
Of the six features shown to be important by the univariate analyses, four were identified as independent predictors of system effectiveness by the primary meta-regression analysis. Most notably, this analysis confirmed the critical importance of automatically providing decision support as part of clinician workflow (P < 0.00001). The other three features were providing decision support at the time and location of decision making (P = 0.0263), providing a recommendation rather than just an assessment (P = 0.0187), and using a computer to generate the decision support (P = 0.0294). Among the 32 clinical decision support systems incorporating all four features,w2-w6 w8-w10 w12 w16 w19 w20 w22 w24-w27 w32 w34-w49 w67 w69 w70 w88 30 (94% (80% to 99%)) significantly improved clinical practice. In contrast, clinical decision support systems lacking any of the four features improved clinical practice in only 18 out of 39 cases (46% (30% to 62%)). The subset analyses for computer based clinical decision support systems and for non-electronic clinical decision support systems yielded results consistent with the findings of the primary regression analysis (table 6).
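The meta-regression modelled the presence or absence of a significant improvement as a binary outcome and each system feature as a binary predictor. As an illustration only — the authors used LogXact with exact methods and forward selection over the 15 features and three environmental indicators, not the simple fit below — a single-predictor logistic regression by Newton-Raphson, applied to the counts reported above (30/32 successes with all four features v 18/39 without), can be sketched in pure Python:

```python
import math

def fit_logit(x, y, iters=25):
    """Newton-Raphson fit of P(y=1 | x) = 1/(1+exp(-(b0 + b1*x)))
    for a single binary predictor x and a binary outcome y."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0  # gradient and Hessian terms
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
            w = p * (1.0 - p)
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01      # invert the 2x2 Hessian
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Counts reported in the review: systems with all four features
# succeeded in 30 of 32 comparisons; systems lacking any of the
# four succeeded in 18 of 39.
x = [1] * 32 + [0] * 39                      # 1 = all four features present
y = [1] * 30 + [0] * 2 + [1] * 18 + [0] * 21 # 1 = practice improved
b0, b1 = fit_logit(x, y)
print(round(math.exp(b1), 1))  # odds ratio for success: 17.5
```

With a single binary predictor the fitted coefficient simply recovers the crude odds ratio of the 2x2 table; the exact P values reported in the paper come from LogXact, not from this approximation.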
Survey of direct experimental evidence
We identified 11 randomised controlled trials in which a clinical decision support system was evaluated directly against the same clinical decision support system with additional features (table 7).w14 w17 w19 w21 w22 w24-w26 w28 w38 w64 w86 In support of the regression results, one study found that system effectiveness was significantly enhanced when the decision support was provided at the time and location of decision making.w19 Similarly, effectiveness was enhanced when clinicians were required to document the reason for not following system recommendationsw14 and when clinicians were provided with periodic feedback about their compliance with system recommendations.w28 Furthermore, two of four studies found a significant beneficial effect when decision support results were provided to both clinicians and patients.w24-w26 w38 w86 In contrast, clinical decision support system effectiveness remained largely unchanged when critiques were worded more strongly and the evidence supporting the critiques was expanded to include institution-specific data,w17 when recommendations were made more specific,w21 when local clinicians were recruited into the system development process,w64 and when bibliographic citations were provided to support the recommendations made by the system.w22
Table 7 Details of 11 randomised controlled trials of clinical decision support systems (CDSS) that directly evaluated effectiveness of specific CDSS features
Discussion
References w1-w88, the studies reviewed in this article, are on bmj.com
We thank Vic Hasselblad for his assistance with the statistical analyses.
Contributors and guarantor: KK, DFL, and EAB contributed to the study design. KK, CAH, and DFL contributed to the data extraction. All authors contributed to the data analysis. KK managed the project and wrote the manuscript, and all authors contributed to the critical revision and final approval of the manuscript. DFL is guarantor.
Funding: This study was supported by research grants T32-GM07171 and F37-LM008161-01 from the National Institutes of Health, Bethesda, Maryland, USA; and by research grants R01-HS10472 and R03-HS10814 from the Agency for Healthcare Research and Quality, Rockville, Maryland, USA. These funders did not play a role in the design, execution, analysis, or publication of this study.
Competing interests: None declared.
Ethical approval: Not required.
References
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348: 2635-45.
Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: National Academy Press, 1999.
Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322: 517-9.
Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 1998;280: 1339-46.
Bennett JW, Glasziou PP. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust 2003;178: 217-22.
Walton RT, Harvey E, Dovey S, Freemantle N. Computerised advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev 2001;1: CD002894.
Walton R, Dovey S, Harvey E, Freemantle N. Computer support for determining drug dose: systematic review and meta-analysis. BMJ 1999;318: 984-90.
Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163: 1409-16.
Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6: 313-21.
Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc 1996;3: 399-409.
Balas EA, Weingarten S, Barb CT, Blumenthal D, Boren SA, Brown GD. Improving preventive care by prompting physicians. Arch Intern Med 2000;160: 301-8.
Shiffman RN, Liaw Y, Brandt CA, Corb GJ. Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc 1999;6: 104-14.
Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Audit and feedback versus alternative strategies: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2000;2: CD000260.
Hulscher ME, Wensing M, van der Weijden T, Grol R. Interventions to implement prevention in primary care. Cochrane Database Syst Rev 2001;1: CD000362.
Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ 1995;153: 1423-31.
Kupets R, Covens A. Strategies for the implementation of cervical and breast cancer screening of women by primary care physicians. Gynecol Oncol 2001;83: 186-97.
Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ 1998;317: 465-8.
Mandelblatt J, Kanetsky PA. Effectiveness of interventions to enhance physician screening for breast cancer. J Fam Pract 1995;40: 162-71.
Wensing M, Grol R. Single and combined strategies for implementing changes in primary care: a literature review. Int J Qual Health Care 1994;6: 115-32.
Mandelblatt JS, Yabroff KR. Effectiveness of interventions designed to increase mammography use: a meta-analysis of provider-targeted strategies. Cancer Epidemiol Biomarkers Prev 1999;8: 759-67.
Stone EG, Morton SC, Hulscher ME, Maglione MA, Roth EA, Grimshaw JM, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med 2002;136: 641-51.
Weingarten SR, Henning JM, Badamgarav E, Knight K, Hasselblad V, Gano A Jr, et al. Interventions used in disease management programmes for patients with chronic illness—which ones work? Meta-analysis of published reports. BMJ 2002;325: 925-32.
Kaplan B. Evaluating informatics applications—some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inf 2001;64: 39-56.
Kanouse DE, Kallich JD, Kahan JP. Dissemination of effectiveness and outcomes research. Health Policy 1995;34: 167-92.
Wendt T, Knaup-Gregori P, Winter A. Decision support in medicine: a survey of problems of user acceptance. Stud Health Technol Inform 2000;77: 852-6.
Wetter T. Lessons learnt from bringing knowledge-based decision support into routine use. Artif Intell Med 2002;24: 195-203.
Sim I, Gorman P, Greenes RA, Haynes RB, Kaplan B, Lehmann H, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc 2001;8: 527-34.
Payne TH. Computer decision support systems. Chest 2000;118: 47-52S.
Shiffman RN, Brandt CA, Liaw Y, Corb GJ. A design model for computer-based guideline implementation based on information management services. J Am Med Inform Assoc 1999;6: 99-103.
Ash JS, Stavri PZ, Kuperman GJ. A consensus statement on considerations for a successful CPOE implementation. J Am Med Inform Assoc 2003;10: 229-34.
Trivedi MH, Kern JK, Marcee A, Grannemann B, Kleiber B, Bettinger T, et al. Development and implementation of computerized clinical guidelines: barriers and solutions. Methods Inf Med 2002;41: 435-42.
Solberg LI, Brekke ML, Fazio CJ, Fowles J, Jacobsen DN, Kottke TE, et al. Lessons from experienced guideline implementers: attend to many factors and use multiple strategies. Jt Comm J Qual Improv 2000;26: 171-88.
Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003;10: 523-30.
Centre for Health Informatics, University of New South Wales. Appendix A: electronic decision support activities in different healthcare settings in Australia. In: National Electronic Decision Support Taskforce. Electronic decision support for Australia's health sector. Canberra: Commonwealth of Australia, 2003. www.ahic.org.au/downloads/nedsrept.pdf (accessed 28 Jan 2005).
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960;20: 37-46.
Aronsky D, Chan KJ, Haug PJ. Evaluation of a computerized diagnostic decision support system for patients with pneumonia: study design considerations. J Am Med Inform Assoc 2001;8: 473-85.
Ramnarayan P, Britto J. Paediatric clinical decision support systems. Arch Dis Child 2002;87: 361-2.
Ryff-de Leche A, Engler H, Nutzi E, Berger M, Berger W. Clinical application of two computerized diabetes management systems: comparison with the log-book method. Diabetes Res 1992;19: 97-105.
Hersh WR. Medical informatics: improving health care through information. JAMA 2002;288: 1955-8.
Bodenheimer T, Grumbach K. Electronic technology: a spark to revitalize primary care? JAMA 2003;290: 259-64.
Lowensteyn I, Joseph L, Levinton C, Abrahamowicz M, Steinert Y, Grover S. Can computerized risk profiles help patients improve their coronary risk? The results of the coronary health assessment study (CHAS). Prev Med 1998;27: 730-7.
Miller RA. Medical diagnostic decision support systems—past, present, and future: a threaded bibliography and brief commentary. J Am Med Inform Assoc 1994;1: 8-27.
Morris AH. Academia and clinic. Developing and implementing computerized protocols for standardization of clinical decisions. Ann Intern Med 2000;132: 373-83.
Tierney WM. Improving clinical decisions and outcomes with information: a review. Int J Med Inf 2001;62: 1-9.
Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision-support systems. Methods Inf Med 1993;32: 1-8.
Wyatt JR. Lessons learnt from the field trial of ACORN, an expert system to advise on chest pain. Proceedings of the Sixth World Conference on Medical Informatics, Singapore 1989: 111-5.
Stock JL, Waud CE, Coderre JA, Overdorf JH, Janikas JS, Heiniluoma KM, et al. Clinical reporting to primary care physicians leads to increased use and understanding of bone densitometry and affects the management of osteoporosis. A randomized trial. Ann Intern Med 1998;128: 996-9.
Frances CD, Alperin P, Adler JS, Grady D. Does a fixed physician reminder system improve the care of patients with coronary artery disease? A randomized controlled trial. West J Med 2001;175: 165-6.
Belcher DW, Berg AO, Inui TS. Practical approaches to providing better preventive care: are physicians a problem or a solution? Am J Prev Med 1988;4: 27-48.
McPhee SJ, Detmer WM. Office-based interventions to improve delivery of cancer prevention services by primary care physicians. Cancer 1993;72: 1100-12.
Strecher VJ, O'Malley MS, Villagra VG, Campbell EE, Gonzalez JJ, Irons TG, et al. Can residents be trained to counsel patients about quitting smoking? Results from a randomized trial. J Gen Intern Med 1991;6: 9-17.
Shannon KC, Sinacore JM, Bennett SG, Joshi AM, Sherin KM, Deitrich A. Improving delivery of preventive health care with the comprehensive annotated reminder tool (CART). J Fam Pract 2001;50: 767-71.
Delaney BC, Fitzmaurice DA, Riaz A, Hobbs FD. Can computerised decision support systems deliver improved quality in primary care? BMJ 1999;319: 1281-3.
Weir CJ, Lees KR, MacWalter RS, Muir KW, Wallesch CW, McLelland EV, et al. Cluster-randomized, controlled trial of computer-based decision support for selecting long-term anti-thrombotic therapy after acute ischaemic stroke. QJM 2003;96: 143-53.
StatXact. Version 6.2.0. Cambridge, MA: Cytel Software, 2004.
Casella G. Refining binomial confidence intervals. Can J Stat 1986;14: 113-29.
Agresti A, Min Y. On small-sample confidence intervals for parameters in discrete distributions. Biometrics 2001;57: 963-71.
Green SB. How many subjects does it take to do a regression analysis? Multivariate Behav Res 1991;26: 499-510.
LogXact. Version 5.0. Cambridge, MA: Cytel Software, 2002.
Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med 1996;15: 361-87.
Grol R. Personal paper: beliefs and evidence in changing clinical practice. BMJ 1997;315: 418-21.
Freemantle N, Grilli R, Grimshaw J, Oxman A. Implementing findings of medical research: the Cochrane Collaboration on Effective Professional Practice. Qual Health Care 1995;4: 45-7.
((Kensaku Kawamoto, fellow1, Caitlin A Hou)
Correspondence to: David F Lobach david.lobach@duke.edu
Objective To identify features of clinical decision support systems critical for improving clinical practice.
Design Systematic review of randomised controlled trials.
Data sources Literature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews.
Study selection Studies had to evaluate the ability of decision support systems to improve clinical practice.
Data extraction Studies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature.
Results Seventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations.
Conclusions Several features were closely correlated with decision support systems' ability to improve patient care significantly. Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate.
Recent research has shown that health care delivered in industrialised nations often falls short of optimal, evidence based care. A nationwide audit assessing 439 quality indicators found that US adults receive only about half of recommended care,1 and the US Institute of Medicine has estimated that up to 98 000 US residents die each year as the result of preventable medical errors.2 Similarly, a retrospective analysis at two London hospitals found that 11% of admitted patients experienced adverse events, 48% of which were judged to be preventable and 8% of which led to death.3
To address these deficiencies in care, healthcare organisations are increasingly turning to clinical decision support systems, which provide clinicians with patient-specific assessments or recommendations to aid clinical decision making.4 Examples include manual or computer based systems that attach care reminders to the charts of patients needing specific preventive care services and computerised physician order entry systems that provide patient-specific recommendations as part of the order entry process. Such systems have been shown to improve prescribing practices,5-7 reduce serious medication errors,8 9 enhance the delivery of preventive care services,10 11 and improve adherence to recommended care standards.4 12 Compared with other approaches to improve practice, these systems have also generally been shown to be more effective and more likely to result in lasting improvements in clinical practice.13-22
Clinical decision support systems do not always improve clinical practice, however. In a recent systematic review of computer based systems, most (66%) significantly improved clinical practice, but 34% did not.4 Relatively little sound scientific evidence is available to explain why systems succeed or fail.23 24 Although some investigators have tried to identify the system features most important for improving clinical practice,12 25-34 they have typically relied on the opinion of a limited number of experts, and none has combined a systematic literature search with quantitative meta-analysis. We therefore systematically reviewed the literature to identify the specific features of clinical decision support systems most crucial for improving clinical practice.
Methods
Data sources
We searched Medline (1966-2003), CINAHL (1982-2003), and the Cochrane Controlled Trials Register (2003) for relevant studies using combinations of the following search terms: decision support systems, clinical; decision making, computer-assisted; reminder systems; feedback; guideline adherence; medical informatics; communication; physician's practice patterns; reminder$; feedback$; decision support$; and expert system. We also systematically searched the reference lists of included studies and relevant reviews.
Inclusion and exclusion criteria
We defined a clinical decision support system as any electronic or non-electronic system designed to aid directly in clinical decision making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration.4 We included both electronic and non-electronic systems because we felt the use of a computer represented only one of many potentially important factors. Our inclusion criteria were any randomised controlled trial evaluating the ability of a clinical decision support system to improve an important clinical practice in a real clinical setting; use of the system by clinicians (physicians, physician assistants, or nurse practitioners) directly involved in patient care; and assessment of improvements in clinical practice through patient outcomes or process measures. Our exclusion criteria were less than seven units of randomisation per study arm; study not in English; mandatory compliance with decision support system; lack of description of decision support content or of clinician interaction with system; and score of less than five points on a 10 point scale assessing five potential sources of study bias.4
Study selection
Two authors independently reviewed the titles, index terms, and abstracts of the identified references and rated each paper as "potentially relevant" or "not relevant" using a screening algorithm based on study type, study design, subjects, setting, and intervention. Two authors then independently reviewed the full texts of the selected articles and again rated each paper as "potentially relevant" or "not relevant" using the screening algorithm. Finally, two authors independently applied the full set of inclusion and exclusion criteria to the potentially relevant studies to select the final set of included studies. Disagreements between reviewers were resolved by discussion, and we measured inter-rater agreement using Cohen's unweighted κ statistic.35
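Cohen's unweighted κ corrects the raw proportion of rater agreement for the agreement expected by chance alone. A minimal sketch of the calculation (the six example ratings are invented for illustration, not taken from the review's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement between
    two raters assigning nominal labels to the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented ratings for six screened papers
a = ["relevant", "relevant", "not", "not", "relevant", "not"]
b = ["relevant", "not", "not", "not", "relevant", "not"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

A κ of 1 indicates perfect agreement and 0 indicates agreement no better than chance; values above roughly 0.6 are conventionally read as substantial agreement.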
Data extraction
A study may include several trial arms, so that multiple comparisons may exist within a single study. For each relevant comparison, two reviewers independently assessed whether the clinical decision support system resulted in an improvement in clinical practice that was both statistically and clinically significant. In some cases changes in practice characterised as clinically significant by the study authors were deemed non-significant by the reviewers. We considered effect size as an alternative outcome measure but concluded that the use of effect size would have been misleading given the significant heterogeneity among the outcome measures reported by the included studies. We also anticipated that the use of effect size would have led to the exclusion of many relevant trials, as many studies fail to report all of the statistical elements necessary to reconstruct effect sizes accurately.
Next, two reviewers independently determined the presence or absence of specific features of decision support systems that could potentially explain why a system succeeded or failed. To construct a set of potential explanatory features, we systematically examined all relevant reviews and primary studies identified by our search strategy and recorded any factors suggested to be important for system effectiveness. Both technical and non-technical factors were eligible for inclusion. Also, if a factor was designated as a barrier to effectiveness (such as "the need for clinician data entry limits system effectiveness") we treated the logically opposite concept as a potential success factor (such as "removing the need for clinician data entry enhances system effectiveness"). Next, we limited our consideration to features that were identified as being potentially important by at least three sources, which left us with 22 potential explanatory features, including general system features, system-clinician interaction features, communication content features, and auxiliary features (tables 1 and 2). Of these 22 features, 15 could be included in our analysis (table 1) because their presence or absence could be reliably extracted from most studies, whereas the remaining seven could not (table 2).
Table 1 Descriptions of the 15 features of clinical decision support systems (CDSS) included in statistical analyses
Table 2 The seven potential explanatory features of clinical decision support systems (CDSS) that could not be included in the statistical analyses
Data synthesis
We used three methods to identify clinical decision support system features important for improving clinical practice.
Univariate analyses—For each of the 15 selected features we individually determined whether interventions possessing the feature were significantly more likely to succeed (result in a statistically and clinically significant improvement in clinical practice) than interventions lacking the feature. We used StatXact55 to calculate 95% confidence intervals for individual success rates56 and for differences in success rates.57
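The study computed exact confidence intervals with StatXact; those exact methods are hard to reproduce in a few lines, so the sketch below substitutes the Wilson score approximation purely to illustrate the kind of interval being reported. Applied to the overall result of 48 successes in 71 comparisons, it closely approximates the reported interval of 56% to 78%:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion.
    (Illustrative stand-in for the exact methods used in the study.)"""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# 48 of 71 control-CDSS comparisons showed significant improvement
low, high = wilson_ci(48, 71)
print(f"{48 / 71:.0%} ({low:.0%} to {high:.0%})")
```

The same function applied to each feature's "with" and "without" subgroups gives the per-feature success rates; the exact methods cited in the paper additionally handle the small subgroup sizes where normal approximations become unreliable.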
Multiple logistic regression analyses—For these analyses, the presence or absence of a statistically and clinically significant improvement in clinical practice constituted the binary outcome variable, and the presence or absence of specific decision support system features constituted binary explanatory variables. We included only cases in which the clinical decision support system was compared against a true control group. For the primary meta-regression analysis, we pooled the results from all included studies, so as to maximise the power of the analysis while decreasing the risk of false positive findings from over-fitting of the model.58 We also conducted separate secondary regression analyses for computer based systems and for non-electronic systems. For all analyses, we included one indicator for the decision support subject matter (acute care v non-acute care) and two indicators for the study setting (academic v non-academic, outpatient v inpatient) to assess the role of potential confounding factors related to the study environment. With the 15 system features and the three environmental factors constituting the potential explanatory variables, we conducted logistic regression analyses using LogXact-5.59 Independent predictor variables were included in the final models using forward selection and a significance level of 0.05.
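The forward-selection scheme described above can be sketched as follows. This is a hypothetical illustration only: the study used exact logistic regression in LogXact, whereas this sketch fits ordinary maximum-likelihood logistic models by gradient ascent and enters variables while the best remaining candidate's likelihood-ratio test (1 df chi-squared) has P < 0.05. The feature names and data are invented:

```python
import math

def fit_loglik(X, y, iters=2000, lr=0.2):
    """Maximised log likelihood of a logistic model, via gradient ascent.
    (Approximate stand-in for the exact fitting used in the study.)"""
    k, n = len(X[0]), len(y)
    w = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            z = max(-30.0, min(30.0, sum(wj * xj for wj, xj in zip(w, xi))))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        z = max(-30.0, min(30.0, sum(wj * xj for wj, xj in zip(w, xi))))
        p = 1.0 / (1.0 + math.exp(-z))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def forward_select(features, y, alpha=0.05):
    """Enter binary predictors one at a time while the best remaining
    candidate improves fit at P < alpha (likelihood-ratio test, 1 df)."""
    chosen, n = [], len(y)
    design = lambda names: [[1.0] + [features[c][i] for c in names]
                            for i in range(n)]
    while True:
        base_ll = fit_loglik(design(chosen), y)
        best_name, best_stat = None, 0.0
        for name in features:
            if name in chosen:
                continue
            stat = max(0.0, 2 * (fit_loglik(design(chosen + [name]), y) - base_ll))
            if stat > best_stat:
                best_name, best_stat = name, stat
        # P value of a 1-df chi-squared statistic
        if best_name is None or math.erfc(math.sqrt(best_stat / 2)) >= alpha:
            return chosen
        chosen.append(best_name)

# Invented toy data: "workflow" strongly predicts success, "noise" does not
features = {
    "workflow": [1] * 12 + [0] * 12,
    "noise": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0,
              1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
}
success = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
           1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(forward_select(features, success))
```

Exact logistic regression, unlike this maximum-likelihood sketch, remains well behaved when a feature perfectly separates successes from failures, which is one reason it suits a meta-regression with only 71 observations.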
Direct experimental evidence—We systematically identified studies in which the effectiveness of a given decision support system was directly compared with the effectiveness of the same system with additional features. We considered a feature to have direct experimental evidence supporting its importance if its addition resulted in a statistically and clinically significant improvement in clinical practice.
Results
Description of studies
Of 10 688 potentially relevant articles screened, 88 papers describing 70 studies met all our inclusion and exclusion criteria (figure).w1-w88 Inter-rater agreements for study selection and data extraction were satisfactory (table 3). The 70 studies contained 82 relevant comparisons, of which 71 compared a clinical decision support system with a control group (control-system comparisons) and 11 directly compared a system with the same system plus extra features (system-system comparisons). We used the control-system comparisons to identify system features statistically associated with successful outcomes and the system-system comparisons to identify features with direct experimental evidence of their importance.
Selection process of trials of clinical decision support systems (CDSS) for review
Table 3 Inter-rater agreement for study selection and data extraction in review of trials of clinical decision support systems (CDSS)
Table 4 describes the characteristics of the 70 included studies. Between them, about 6000 clinicians acted as study subjects while caring for about 130 000 patients. The commonest types of decision support system were computer based systems that provided patient-specific advice on printed encounter forms or on printouts attached to charts (34%),w2 w4 w6 w8-w10 w12 w14 w19 w22-w28 w32 w34-w49 non-electronic systems that attached patient-specific advice to appropriate charts (26%),w1 w29-w31 w50-w66 and systems that provided decision support within computerised physician order entry systems (16%).w3 w7 w11 w15-w17 w20 w67-w71
Table 4 Characteristics of the 70 studies of clinical decision support systems (CDSS) included in review
Univariate analyses of clinical decision support system features
Table 5 summarises the success rates of clinical decision support systems with and without the 15 potentially important features. Overall, 48 of the 71 decision support systems (68% (95% confidence interval 56% to 78%)) significantly improved clinical practice. For five of the 15 features, the success rate of interventions possessing the feature was significantly greater than that of interventions lacking the feature.
Table 5 Success rates* of clinical decision support systems (CDSS) with and without 15 potentially important features. Results of 71 control-CDSS comparisons
Most notably, 75% of interventions succeeded when the decision support was provided to clinicians automatically, whereas none succeeded when clinicians were required to seek out the advice of the decision support system (rate difference 75% (37% to 84%)). Similarly, systems that were provided as an integrated component of charting or order entry systems were significantly more likely to succeed than stand alone systems (rate difference 37% (6% to 61%)); systems that used a computer to generate the decision support were significantly more effective than systems that relied on manual processes (rate difference 26% (2% to 49%)); systems that prompted clinicians to record a reason when not following the advised course of action were significantly more likely to succeed than systems that allowed the system advice to be bypassed without recording a reason (rate difference 41% (19% to 54%)); and systems that provided a recommendation (such as "Patient is at high risk of coronary artery disease; recommend initiation of β blocker therapy") were significantly more likely to succeed than systems that provided only an assessment of the patient (such as "Patient is at high risk of coronary artery disease") (rate difference 35% (8% to 58%)).
Finally, systems that provided decision support at the time and location of decision making were substantially more likely to succeed than systems that did not provide advice at the point of care, but the difference in success rates fell just short of being significant at the 0.05 level (rate difference 48% (-0.46% to 70.01%)).
Meta-regression analysis
The univariate analyses evaluated each potential success factor in isolation from the other factors. We therefore conducted multivariate logistic regression analyses in order to identify independent predictors of clinical decision support system effectiveness while taking into consideration the presence of other potentially important factors. Table 6 shows the results of these analyses.
Table 6 Features of clinical decision support systems (CDSS) associated with improved clinical practice. Results of meta-regression analyses of 71 control-CDSS comparisons
Of the six features shown to be important by the univariate analyses, four were identified as independent predictors of system effectiveness by the primary meta-regression analysis. Most notably, this analysis confirmed the critical importance of automatically providing decision support as part of clinician workflow (P < 0.00001). The other three features were providing decision support at the time and location of decision making (P = 0.0263), providing a recommendation rather than just an assessment (P = 0.0187), and using a computer to generate the decision support (P = 0.0294). Among the 32 clinical decision support systems incorporating all four features,w2-w6 w8-w10 w12 w16 w19 w20 w22 w24-w27 w32 w34-w49 w67 w69 w70 w88 30 (94% (80% to 99%)) significantly improved clinical practice. In contrast, clinical decision support systems lacking any of the four features improved clinical practice in only 18 out of 39 cases (46% (30% to 62%)). The subset analyses for computer based clinical decision support systems and for non-electronic clinical decision support systems yielded results consistent with the findings of the primary regression analysis (table 6).
Survey of direct experimental evidence
We identified 11 randomised controlled trials in which a clinical decision support system was evaluated directly against the same clinical decision support system with additional features (table 7).w14 w17 w19 w21 w22 w24-w26 w28 w38 w64 w86 In support of the regression results, one study found that system effectiveness was significantly enhanced when the decision support was provided at the time and location of decision making.w19 Similarly, effectiveness was enhanced when clinicians were required to document the reason for not following system recommendationsw14 and when clinicians were provided with periodic feedback about their compliance with system recommendations.w28 Furthermore, two of four studies found a significant beneficial effect when decision support results were provided to both clinicians and patients.w24-w26 w38 w86 In contrast, clinical decision support system effectiveness remained largely unchanged when critiques were worded more strongly and the evidence supporting the critiques was expanded to include institution-specific data,w17 when recommendations were made more specific,w21 when local clinicians were recruited into the system development process,w64 and when bibliographic citations were provided to support the recommendations made by the system.w22
Table 7 Details of 11 randomised controlled trials of clinical decision support systems (CDSS) that directly evaluated effectiveness of specific CDSS features
Discussion
References w1-w88, the studies reviewed in this article, are on bmj.com
We thank Vic Hasselblad for his assistance with the statistical analyses.
Contributors and guarantor: KK, DFL, and EAB contributed to the study design. KK, CAH, and DFL contributed to the data extraction. All authors contributed to the data analysis. KK managed the project and wrote the manuscript, and all authors contributed to the critical revision and final approval of the manuscript. DFL is guarantor.
Funding: This study was supported by research grants T32-GM07171 and F37-LM008161-01 from the National Institutes of Health, Bethesda, Maryland, USA; and by research grants R01-HS10472 and R03-HS10814 from the Agency for Healthcare Research and Quality, Rockville, Maryland, USA. These funders did not play a role in the design, execution, analysis, or publication of this study.
Competing interests: None declared.
Ethical approval: Not required.
References
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348: 2635-45.
Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: National Academy Press, 1999.
Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322: 517-9.
Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 1998;280: 1339-46.
Bennett JW, Glasziou PP. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust 2003;178: 217-22.
Walton RT, Harvey E, Dovey S, Freemantle N. Computerised advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev 2001;1: CD002894.
Walton R, Dovey S, Harvey E, Freemantle N. Computer support for determining drug dose: systematic review and meta-analysis. BMJ 1999;318: 984-90.
Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163: 1409-16.
Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6: 313-21.
Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc 1996;3: 399-409.
Balas EA, Weingarten S, Barb CT, Blumenthal D, Boren SA, Brown GD. Improving preventive care by prompting physicians. Arch Intern Med 2000;160: 301-8.
Shiffman RN, Liaw Y, Brandt CA, Corb GJ. Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc 1999;6: 104-14.
Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Audit and feedback versus alternative strategies: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2000;2: CD000260.
Hulscher ME, Wensing M, van der Weijden T, Grol R. Interventions to implement prevention in primary care. Cochrane Database Syst Rev 2001;1: CD000362.
Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ 1995;153: 1423-31.
Kupets R, Covens A. Strategies for the implementation of cervical and breast cancer screening of women by primary care physicians. Gynecol Oncol 2001;83: 186-97.
Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ 1998;317: 465-8.
Mandelblatt J, Kanetsky PA. Effectiveness of interventions to enhance physician screening for breast cancer. J Fam Pract 1995;40: 162-71.
Wensing M, Grol R. Single and combined strategies for implementing changes in primary care: a literature review. Int J Qual Health Care 1994;6: 115-32.
Mandelblatt JS, Yabroff KR. Effectiveness of interventions designed to increase mammography use: a meta-analysis of provider-targeted strategies. Cancer Epidemiol Biomarkers Prev 1999;8: 759-67.
Stone EG, Morton SC, Hulscher ME, Maglione MA, Roth EA, Grimshaw JM, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med 2002;136: 641-51.
Weingarten SR, Henning JM, Badamgarav E, Knight K, Hasselblad V, Gano A Jr, et al. Interventions used in disease management programmes for patients with chronic illness—which ones work? Meta-analysis of published reports. BMJ 2002;325: 925-32.
Kaplan B. Evaluating informatics applications—some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inf 2001;64: 39-56.
Kanouse DE, Kallich JD, Kahan JP. Dissemination of effectiveness and outcomes research. Health Policy 1995;34: 167-92.
Wendt T, Knaup-Gregori P, Winter A. Decision support in medicine: a survey of problems of user acceptance. Stud Health Technol Inform 2000;77: 852-6.
Wetter T. Lessons learnt from bringing knowledge-based decision support into routine use. Artif Intell Med 2002;24: 195-203.
Sim I, Gorman P, Greenes RA, Haynes RB, Kaplan B, Lehmann H, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc 2001;8: 527-34.
Payne TH. Computer decision support systems. Chest 2000;118: 47-52S.
Shiffman RN, Brandt CA, Liaw Y, Corb GJ. A design model for computer-based guideline implementation based on information management services. J Am Med Inform Assoc 1999;6: 99-103.
Ash JS, Stavri PZ, Kuperman GJ. A consensus statement on considerations for a successful CPOE implementation. J Am Med Inform Assoc 2003;10: 229-34.
Trivedi MH, Kern JK, Marcee A, Grannemann B, Kleiber B, Bettinger T, et al. Development and implementation of computerized clinical guidelines: barriers and solutions. Methods Inf Med 2002;41: 435-42.
Solberg LI, Brekke ML, Fazio CJ, Fowles J, Jacobsen DN, Kottke TE, et al. Lessons from experienced guideline implementers: attend to many factors and use multiple strategies. Jt Comm J Qual Improv 2000;26: 171-88.
Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003;10: 523-30.
Centre for Health Informatics, University of New South Wales. Appendix A: electronic decision support activities in different healthcare settings in Australia. In: National Electronic Decision Support Taskforce. Electronic decision support for Australia's health sector. Canberra: Commonwealth of Australia, 2003. www.ahic.org.au/downloads/nedsrept.pdf (accessed 28 Jan 2005).
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960;20: 37-46.
Aronsky D, Chan KJ, Haug PJ. Evaluation of a computerized diagnostic decision support system for patients with pneumonia: study design considerations. J Am Med Inform Assoc 2001;8: 473-85.
Ramnarayan P, Britto J. Paediatric clinical decision support systems. Arch Dis Child 2002;87: 361-2.
Ryff-de Leche A, Engler H, Nutzi E, Berger M, Berger W. Clinical application of two computerized diabetes management systems: comparison with the log-book method. Diabetes Res 1992;19: 97-105.
Hersh WR. Medical informatics: improving health care through information. JAMA 2002;288: 1955-8.
Bodenheimer T, Grumbach K. Electronic technology: a spark to revitalize primary care? JAMA 2003;290: 259-64.
Lowensteyn I, Joseph L, Levinton C, Abrahamowicz M, Steinert Y, Grover S. Can computerized risk profiles help patients improve their coronary risk? The results of the coronary health assessment study (CHAS). Prev Med 1998;27: 730-7.
Miller RA. Medical diagnostic decision support systems—past, present, and future: a threaded bibliography and brief commentary. J Am Med Inform Assoc 1994;1: 8-27.
Morris AH. Academia and clinic. Developing and implementing computerized protocols for standardization of clinical decisions. Ann Intern Med 2000;132: 373-83.
Tierney WM. Improving clinical decisions and outcomes with information: a review. Int J Med Inf 2001;62: 1-9.
Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision-support systems. Methods Inf Med 1993;32: 1-8.
Wyatt JR. Lessons learnt from the field trial of ACORN, an expert system to advise on chest pain. Proceedings of the Sixth World Conference on Medical Informatics, Singapore 1989: 111-5.
Stock JL, Waud CE, Coderre JA, Overdorf JH, Janikas JS, Heiniluoma KM, et al. Clinical reporting to primary care physicians leads to increased use and understanding of bone densitometry and affects the management of osteoporosis. A randomized trial. Ann Intern Med 1998;128: 996-9.
Frances CD, Alperin P, Adler JS, Grady D. Does a fixed physician reminder system improve the care of patients with coronary artery disease? A randomized controlled trial. West J Med 2001;175: 165-6.
Belcher DW, Berg AO, Inui TS. Practical approaches to providing better preventive care: are physicians a problem or a solution? Am J Prev Med 1988;4: 27-48.
McPhee SJ, Detmer WM. Office-based interventions to improve delivery of cancer prevention services by primary care physicians. Cancer 1993;72: 1100-12.
Strecher VJ, O'Malley MS, Villagra VG, Campbell EE, Gonzalez JJ, Irons TG, et al. Can residents be trained to counsel patients about quitting smoking? Results from a randomized trial. J Gen Intern Med 1991;6: 9-17.
Shannon KC, Sinacore JM, Bennett SG, Joshi AM, Sherin KM, Deitrich A. Improving delivery of preventive health care with the comprehensive annotated reminder tool (CART). J Fam Pract 2001;50: 767-71.
Delaney BC, Fitzmaurice DA, Riaz A, Hobbs FD. Can computerised decision support systems deliver improved quality in primary care? BMJ 1999;319: 1281-3.
Weir CJ, Lees KR, MacWalter RS, Muir KW, Wallesch CW, McLelland EV, et al. Cluster-randomized, controlled trial of computer-based decision support for selecting long-term anti-thrombotic therapy after acute ischaemic stroke. QJM 2003;96: 143-53.
StatXact. Version 6.2.0. Cambridge, MA: Cytel Software, 2004.
Casella G. Refining binomial confidence intervals. Can J Stat 1986;14: 113-29.
Agresti A, Min Y. On small-sample confidence intervals for parameters in discrete distributions. Biometrics 2001;57: 963-71.
Green SB. How many subjects does it take to do a regression analysis? Multivariate Behav Res 1991;26: 499-510.
LogXact. Version 5.0. Cambridge, MA: Cytel Software, 2002.
Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med 1996;15: 361-87.
Grol R. Personal paper: beliefs and evidence in changing clinical practice. BMJ 1997;315: 418-21.
Freemantle N, Grilli R, Grimshaw J, Oxman A. Implementing findings of medical research: the Cochrane Collaboration on Effective Professional Practice. Qual Health Care 1995;4: 45-7.