Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group
BMJ (British Medical Journal)
1 Department of Interdisciplinary Oncology, H Lee Moffitt Cancer Center and Research Institute, University of South Florida, 12902 Magnolia Drive, Tampa, FL 33612, USA, 2 H Lee Moffitt Cancer Center and Research Institute, 3 UK Cochrane Centre, Oxford OX2 7LG, 4 Statistical Unit, Radiation Therapy Oncology Group, PA 19107, USA
Correspondence to: B Djulbegovic djulbebm@moffitt.usf.edu
Abstract
Evaluation of the quality of published clinical research is central to informed decision making. Information on trial quality is particularly important during peer review or when results from individual studies are evaluated in systematic reviews or meta-analyses.1 2 The quality of research should always be considered when a report is used in decision making in health care. Poorly conducted and reported research seriously compromises the integrity of the research process, especially if biased results receive false credibility.3
Many efforts have been made to improve the quality of studies and their related publications. The best known example is the Consolidated Standards of Reporting Trials (CONSORT) statement, published to improve the quality of trial reports.3 Such efforts to improve the quality of clinical research, however, imply that if certain design or methodological features are not reported then they were not done. Ideally, assessment of the quality of clinical research should address not only reporting but also the original design and the intended plan for conduct and analysis as specified in the trial's research protocol. The importance of linking the final report of a clinical trial with its original research protocol led some authors to argue that no randomised controlled trial should be conducted without publication of its research protocol.4 The reasons are that critical comments may be encouraged, leading to improvements in trial design; publication can be coupled with trial registration; the original protocol can be compared with what was subsequently done; and investigators can more easily appreciate what research is being conducted in their areas of interest.4 More importantly, publication of research protocols is one of the best ways to minimise bias, because hypotheses and methods are stated explicitly a priori, without knowledge of the results.5 Many randomised controlled trials are preceded by a written protocol, which describes the conduct of the trial more comprehensively than is possible in many journal articles, and making these protocols available would provide much useful additional information. We aimed to test the assumption that poor reporting reflects poor methods by comparing research protocols with the information published in the final reports of a set of randomised controlled trials.
Methods and results
Overall, there were 59 terminated phase III randomised controlled trials, three of which had not been published. We found 58 published papers for the remaining 56 protocols for use in our study. The figure summarises the results according to information from the papers, the protocols, and the Radiation Therapy Oncology Group's statistical office. It shows that the reporting of methods in the publications does not necessarily reflect the methodological quality of the associated protocols. For example, a priori sample size calculations were performed for 44 (76%) trials, but this information was given in only nine of the 58 published papers (16%). Although all trials had adequate allocation concealment (through central randomisation), this was reported in only 24 (41%) of the papers. From our initial data extraction, we found that 40 (69%) of the trials used an intention to treat analysis; this number increased to 48 (83%) after verification by the Radiation Therapy Oncology Group. End points were clearly defined and α and β errors were prespecified in 44 (76%) and 43 (74%) trials, respectively, but these details were reported in only six (10%) of the papers. Interestingly, reporting of drop outs was meticulous: we found no difference in frequency (91%) between the data presented in the papers and those in the original files.
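As a rough check, the percentages in this paragraph can be reproduced from the absolute counts, assuming (as the figure legend indicates) that all percentages are expressed out of the 58 published reports. This is an illustrative sketch, not part of the original analysis:

```python
# Counts taken from the text: what the protocols/verification show ("done")
# versus what the published reports state ("reported"). For intention to
# treat, "reported" is the figure from the authors' initial data extraction.
N_REPORTS = 58

counts = {
    # item: (done, reported)
    "a priori sample size calculation": (44, 9),
    "adequate allocation concealment": (58, 24),
    "intention to treat analysis": (48, 40),
    "alpha and beta errors prespecified": (43, 6),
}

def pct(n, total=N_REPORTS):
    """Percentage rounded to the nearest integer, as in the paper."""
    return round(100 * n / total)

for item, (done, reported) in counts.items():
    print(f"{item}: done in {pct(done)}%, reported in {pct(reported)}%")
```

Running this recovers the published figures, e.g. 44/58 → 76% done versus 9/58 → 16% reported for sample size calculations.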
Quality of reporting compared with actual methodological quality of 56 randomised controlled trials (58 reports) conducted by the Radiation Therapy Oncology Group, based on information from published reports, protocols, and verification by the group. Absolute numbers of reports are shown
Discussion

References
Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 1998;352: 609-13.
Huwiler-Müntener K, Jüni P, Junker C, Egger M. Quality of reporting of randomized trials as a measure of methodologic quality. JAMA 2002;287: 2801-4.
Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA 2001;285: 1987-91.
Godlee F. Publishing study protocols: making them visible will improve registration, reporting and recruitment. BMC News Views 2001;2: 4.
Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA 2002;287: 2831-4.
RTOG procedure manual. Philadelphia: Radiation Therapy Oncology Group Headquarters Office, 2003.
Egger M, Davey Smith G, Altman D. Systematic reviews in health care. Meta-analysis in context, 2nd ed. London: BMJ Publishing, 2001.
Clarke M, Oxman A, eds. Cochrane reviewers' handbook 4.2.0. In Cochrane Library, issue 2. Oxford: Update Software, 2003.
Liberati A, Himel HN, Chalmers TC. A quality assessment of randomized control trials of primary treatment of breast cancer. J Clin Oncol 1986;4: 942-51.
Herson J. Patient registration in a cooperative oncology group. Control Clin Trials 1980;1: 101-10.
Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273: 408-12.
Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001;285: 1992-5.
(Accepted 10 October 2003)
Related Articles
Are experimental treatments for cancer in children superior to established treatments? Observational study of randomised controlled trials by the Children's Oncology Group
Ambuj Kumar, Heloisa Soares, Robert Wells, Mike Clarke, Iztok Hozo, Archie Bleyer, Gregory Reaman, Iain Chalmers, and Benjamin Djulbegovic
BMJ 2005 331: 1295.
Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study
Julie Pildal, An-Wen Chan, Asbjørn Hróbjartsson, Elisabeth Forfang, Douglas G Altman, and Peter C Gøtzsche
BMJ 2005 330: 1049.
Quality of randomised controlled trials: Quality of trial methods is not good in all disciplines
Helen H G Handoll
BMJ 2004 328: 286.
Poor reporting of trials does not mean poor quality trials
Heloisa P Soares
BMJ 2004 328: 0.