At the frontier of biomedical publication: Chicago 2005
BMJ
Last month the fifth congress on peer review and biomedical publication was held in Chicago. The presentations highlighted that we still have plenty of room to improve the quality of published research
Evidence started to matter in biomedical publishing soon after it came to matter in medicine—relatively recently. The first international congress on peer review and biomedical publication was held in Chicago in 1989. At the time of the third congress, in 1997, only 146 original scientific articles had been published on peer review, of which 22 were prospective studies and 11 randomised controlled trials.1 Since then the body of evidence has been growing, with about 200 articles indexed in Medline a year.2 We now have plenty of evidence to support the contention that peer review is "expensive, slow, subjective and biased, open to abuse, patchy at detecting important methodological defects, and almost useless at detecting fraud or misconduct."3 The evidence on how to improve the process, however, is scarce. What did the fifth congress add?
Industry funding
Some of the presented research looked into what happens when the pharmaceutical industry sponsors meta-analyses—the top of the hierarchy of evidence. Yank and colleagues analysed the agreement between results and conclusions in 71 meta-analyses of antihypertensive drugs published between 1966 and 2002.4 In about a third, authors disclosed financial ties with the pharmaceutical industry. Meta-analyses sponsored by industry were five times more likely than those funded by other sources to report conclusions favouring the study drug when such conclusions were not supported by the results. Meta-analyses funded by academic institutions showed no disagreement between the results and conclusions. Richard Smith, former editor of the BMJ, said: "It's a marvellous study and very disturbing." This indicates an embarrassing editorial failure, commented Yank. But she refused to be drawn on the identity of the worst offending journals.
Another study compared quality and conclusions in pairs of meta-analyses of the same drugs for treating the same disease, one a Cochrane systematic review and the other sponsored by the manufacturing drug company.5 Despite the limitations—only eight pairs of meta-analyses met the inclusion criteria and the study wasn't blinded—the results were compelling. None of the Cochrane reviews and all of the industry sponsored meta-analyses concluded without reservation that the study drug was better than the comparison treatment. "Patients—to the barricades," said Peter Gøtzsche towards the end of his presentation.
Gardner and Lidz provided further evidence of the pharmaceutical industry skewing the published literature. In their questionnaire study of 322 authors who published drug trials between 1998 and 2001, almost half stated that at some point in their careers they had participated in a trial that was never published.6 The sponsoring pharmaceutical company directly prevented publication of one in five of these trials, and trials that showed no difference were twice as likely to be unpublished as trials with positive results.
Peer review
Who then should assist editors in their decisions about what to publish? Many journals ask submitting authors to suggest or exclude potential reviewers, but is this a good thing? Two studies showed that although reviewers suggested by authors produce reviews comparable in quality and timeliness to those of reviewers suggested by editors, they are significantly more likely to recommend publication.7 8
Other factors that can increase authors' chances of publication are excluding a reviewer9 and citing a reviewer's previously published work.10 Egger and colleagues found a clear trend: reviewers' recommendations for publication became more favourable the more their own publications were cited in the manuscript. The message for authors is clear: see whose work you have cited most in your paper, recommend those people as reviewers, ask the journal to exclude your known enemies, and hope for the best.
The process of blinding reviewers to authors' identities continues to give hope to some. In a before and after study, Ross and colleagues studied the effect of removing authors' names and affiliations from abstracts being peer reviewed for inclusion in the annual scientific sessions of the American Heart Association.11 Before 2002, when reviewers were aware of authors' identities, 41% of accepted abstracts originated from US institutions. The proportion fell to 33% after blinding was introduced, and acceptance rates for abstracts from non-US institutions and non-English speaking countries rose significantly. The authors called for the universal adoption of blind peer review of abstracts submitted for scientific meetings. Such an intervention, however, is unlikely to help journals and grant committees, as larger pieces of work become highly recognisable—previous trials have shown that up to 46% of reviewers successfully identify the authors of research articles.12
How much these studies add to improving the quality of peer review remains unclear. Drummond Rennie, the guiding force behind the congresses and deputy editor of JAMA, repeated his call for more qualitative research into the cognitive processes involved in peer review.2 Such research was in short supply in Chicago. One study analysed editorial meetings at JAMA over two months and reported that editors' highest priority was the scientific merit of papers, followed closely by meeting the needs of readers and timeliness.13 Quality of writing and significance of results were discussed less often. The other qualitative study examined reviewers' attitudes and values and reported that many reviewers challenged conventional beliefs about the purpose and process of peer review.14
Impact factor
Perhaps the aspect of biomedical publication most directly challenged at the congress was the journal impact factor. Eugene Garfield, the father of the impact factor, confirmed that this measure has been misinterpreted and abused almost since he first started using it in 1963 to select journals for inclusion in the Institute for Scientific Information's Science Citation Index. It is calculated each year by dividing the number of citations that year to articles the journal published in the previous two years by the number of citable items it published in those years. The impact factor can therefore be manipulated by journal self citation and by fiddling with the number of citable articles. Garfield explained other measures that can help expose the imperfections of the impact factor. Yet despite these imperfections, and despite being a poor measure of individual scientists' performance,15 the impact factor remains important for academic advancement and journal prestige.
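The calculation, and the two routes by which it can be gamed, can be sketched in a few lines. The numbers below are invented purely for illustration; the two-year citation window follows the standard ISI definition.

```python
# Minimal sketch of the 2-year journal impact factor.
# All figures are illustrative, not real journal data.

def impact_factor(citations: int, citable_items: int) -> float:
    """Citations received this year to articles from the previous two
    years, divided by the number of citable items from those years."""
    if citable_items == 0:
        raise ValueError("journal published no citable items")
    return citations / citable_items

# Two routes to a higher impact factor: attract more citations
# (a bigger numerator), or publish fewer citable articles
# (a smaller denominator) -- the manipulation described in the text.
baseline = impact_factor(citations=3000, citable_items=400)     # 7.5
fewer_items = impact_factor(citations=3000, citable_items=300)  # 10.0
```

The same arithmetic explains why a rising impact factor need not mean rising influence: holding citations fixed while shrinking the pool of citable items inflates the ratio just as effectively.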
How realistically does the impact factor reflect journals' true impact? Chew and colleagues from the Medical Journal of Australia showed that, over the past 11 years, all the main general medical journals saw their impact factors rise.16 However, for Annals of Internal Medicine, JAMA, and the Lancet, the rise was largely due to falls in denominators—that is, the number of citable items—whereas at the other journals (BMJ, CMAJ, Medical Journal of Australia, and the New England Journal of Medicine) the rise in impact factor was mostly due to increases in the number of citations. Chew and colleagues also interviewed the editors of these journals; two admitted to having a deliberate policy to publish fewer articles in order to increase the impact factor. All the editors who were interviewed agreed that the impact factor is a "mixed blessing—attractive to researchers but not the best measure of clinical impact."
Smaller journals may have to adopt other strategies to raise their impact factor. A study by Sahu and colleagues suggested that open access might be a powerful means for small journals to increase their visibility, citations, and consequently impact factor.17 Citations of articles published in the Journal of Postgraduate Medicine between 1990 and 1999 rose significantly after the journal went open access in 2001. Half of the articles were first cited only after open access was introduced.
Research into authorship adds further grim detail to the picture that the evidence is outlining. Almost 70% of corresponding authors of papers published in the Croatian Medical Journal declare their contributions differently if asked about them twice.18 Furthermore, up to 60% of authors don't fulfil the International Committee of Medical Journal Editors' criteria for authorship after declaring their contributions on a categorical or open ended form. However, if given a more leading type of form, authors do better at stating their contributions, and fewer than 20% fail to fulfil the criteria.
Solutions for a better future
The most tangible take home messages for editors came from research that focused on improving the presentation of studies and journals' advice to contributors. Editors were urged to introduce a mini-CONSORT for reporting randomised controlled trials in abstracts,19 guidelines for reporting crossover trials,20 and a uniform system for grading the published evidence.21 They were also asked to raise the quality of reporting of relative risks and odds ratios in abstracts,22 improve the statistical and methodological content of their advice to contributors,23 24 and rigorously implement existing policies.25-27
Summary points
The body of evidence on peer review is steadily growing
The drug industry is successfully skewing the literature in its favour
It pays to recommend or exclude reviewers when submitting a paper for publication
Many journals fiddle with their impact factors
Honorary authors remain prevalent
Apart from improving the quality of published literature, better reporting should speed up the advent of trial banks—open access electronic knowledge bases that can capture in detail aspects of trial design, execution, and results in a form that computers can understand.28 Decision support systems can then use these data more selectively, providing clinician friendly computer assistance for critical appraisal and evidence based practice. Sim reported that trialists found it easier to enter their data into the trial bank than to write a traditional research paper, and that readers found it easier to extract information about the trial—surely a sign that the days of journals reporting trials are numbered.
Sim went on to describe a compelling vision of the future in which electronic patient records are dragged and dropped into databases and automatically open up boxes with relevant evidence for managing that particular patient. It may not be long before such powerful systems are on doctors' desktops. Professional and moral responsibility to ensure that those boxes contain reliable information that will most benefit patients lies with us all.
Competing interests: None declared.
References
Overbeke J. The state of evidence: what we know and what we don't know about journal peer review. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Books, 1999: 32-44.
Rennie D. Fourth international congress on peer review in biomedical publication. JAMA 2002;287: 2759-60.
Godlee F, Jefferson T. Introduction. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Books, 1999: xi-xv.
Yank V, Rennie D, Bero L. Are authors' financial ties with pharmaceutical companies associated with positive results or conclusions in meta-analyses on antihypertensive medications? Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Jørgensen AW, Gøtzsche P. Sponsorship, bias, and methodology: Cochrane reviews compared with industry-sponsored meta-analyses of the same drugs. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Gardner W, Lidz CW. Failures to publish pharmaceutical clinical trials. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Schroter S, Tite L, Hutchings A, Black N. Comparison of author and editor suggested reviewers in terms of review quality, timeliness, and recommendations for publication. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Wager E, Parkin EC, Tamber PS. Are reviewers suggested by authors as good as those chosen by editors? Results of a rater-blinded, retrospective study. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Goldsmith LA, Blalock E, Bobkova H, Hall RP. Effect of authors' suggestions concerning reviewers on manuscript acceptance. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Egger M, Wood L, von Elm E, Wood A, Shlomo YB, May M. Are reviewers influenced by citations of their own work? Evidence from the International Journal of Epidemiology. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Ross JS, Gross CP, Hong Y, Grant AO, Daniels SR, Hachinski VC, et al. Assessment of blind peer review on abstract acceptance for scientific meetings. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Fisher M, Friedman SB, Strauss B. The effects of blinding on acceptance of research papers by peer review. JAMA 1994;272: 143-6.
Dickersin K, Mansell C. Rethinking publication bias: developing a scheme for classifying editorial discussion. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Callaham M, Tercier J. Qualitative profile of journal peer reviewers and predictors of peer reviewer quality. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Smith R. Unscientific practice flourishes in science. BMJ 1998;316: 1036-40.
Chew M, Van Der Weyden M, Villanueva EV. More than a decade in the life of the impact factor. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Sahu DKR, Gogtay NJ, Bavdekar SB. Effect of open access on citation rates for a small biomedical journal. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Marusic A, Bates T, Anic A, Ilakovac V, Marusic M. In the eye of the beholder: contribution disclosure practices and inappropriate authorship. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Hopewell S, Clarke M. Trials reported in abstracts: the need for a mini-CONSORT. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Mills EJ, Chan A, Wu P, Guyatt GH, Altman DG. Design, analysis, and presentation of crossover trials. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Scott JR, Rinehart R, Spong CY. Grading the evidence of published papers for the benefit of clinicians. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Gøtzsche P. Are relative risks and odds ratios in abstracts believable? Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Altman DG, Schriger DL. The statistical and methodological content of journals' instructions for authors. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Schriger DL, Arora S, Schringer VA, Altman DG. An analysis of the content of medical journals' instructions for authors. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Siegel PZ, Thacker SB, Goodman RA, Gillespie C. Titles of articles in peer-reviewed journals lack essential information: a structured review of contributions to 4 leading medical journals, 1995 and 2001. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Sather C, Nuovo J. Reporting methods of adverse events in randomised controlled trials. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Schwartz LM, Woloshin S, Dvorin EL, Welch HG. Ratio measures in leading medical journals: where are the underlying absolute risks? Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Sim I, Olasov B. Trial bank publishing of randomised trials: preliminary results. Proceedings of the 5th international congress on peer review and biomedical publication. Chicago, September 2005.
Kristina Fiter, Roger Robinson