The Cochrane Collaboration and 'evidence-based medicine'

Opinion
J.P. Vandenbroucke
Cite this article as
Ned Tijdschr Geneeskd. 1995;139:1476-7

See also the article on page 1478.

The enthusiastic and motivating article on the Cochrane Collaboration in this journal builds on the successes achieved in recent years with meta-analysis, the systematic review of the literature.1 The aim is to give medicine as a whole a foundation that rests (more) on scientifically demonstrated facts: 'evidence-based medicine'. Amid all this enthusiasm it is useful to pause for a moment and ask ourselves what the roots of this movement are.

By coincidence, two pieces appeared very recently in The Lancet and the British Medical Journal, each with an accompanying editorial commentary, on what meta-analysis can accomplish.2-5 The conclusions of the two pieces and of the two commentaries were diametrically opposed. In The Lancet we could read that meta-analysis, the systematic and quantitative review of the literature, actually performed about as well as a large randomised trial: the results of a meta-analysis always lay…

Author information

Academisch Ziekenhuis, Department of Clinical Epidemiology, Postbus 9600, 2300 RC Leiden.

Prof. dr. J.P. Vandenbroucke, clinical epidemiologist.


Responses

Brisbane (Australia), Oxford (England), November 1995,

There is now substantial evidence that clinical investigators are responsible for biased underreporting of research.1 Compared with studies yielding unremarkable point estimates of effects, studies which have yielded relatively dramatic point estimates of effects are more likely to be selected for presentation at scientific meetings;2 more likely to be reported in print;1 more likely to be published as full reports;3 more likely to be published in journals that are widely read;4,5 and more likely to be cited in reports of subsequent, related studies.6 These reporting patterns introduce both bias and imprecision into assessments of the effects of health care, and this phenomenon must inevitably mean that decisions taken in health care and health research are less well informed than they could be.

Opinions differ about the consequences – in practice – of selective reporting of well-designed clinical research. In a letter to The Lancet published in 1993, de Melker, Rosendaal and Vandenbroucke suggested that the importance of publication bias was being overstated and that it was unlikely to pose any substantive threat to the wellbeing of patients.7 Although it is not difficult to think of examples which challenge this view,8,9 that is not the reason that we feel prompted to write to Nederlands Tijdschrift voor Geneeskunde.

In a recent commentary published in this journal, Vandenbroucke challenges the view that selective reporting of well-designed clinical research is not only unscientific but unethical (1995;1476-7). As authors of two articles on the ethics of underreporting of clinical research (which were written independently of each other),10,11 we wish to challenge Vandenbroucke's views.

In our view, investigators embarking on clinical research projects enter into implied if not formal contracts, both with the funding bodies through which the resources (often public) required to support their research have been provided, and with the individual patients who are recruited to participate in the research. We suggest that research funding bodies and the vast majority (if not all) of the patients participating in clinical research assume that their involvement is contributing to a growth in knowledge. This implied contract is violated by investigators who, having conducted well-designed research, fail to make the results of their investigations publicly available.

On both scientific and ethical grounds, we find it difficult to understand Vandenbroucke's position on selective reporting of well-designed research; but it is his apparent rejection of the ethical concerns raised by this practice we find most disturbing. It would be interesting to know the extent to which his views are shared by other researchers, research funding bodies, research ethics committees, clinicians and the public in general in the Netherlands.

J. Pearn
I. Chalmers
References
  1. Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials 1993;Apr 28:Doc no 50.

  2. Koren G, Graham K, Shear H, Einarson T. Bias against the null hypothesis: the reproductive hazards of cocaine. Lancet 1989;ii:1440-2.

  3. Scherer RW, Dickersin K, Langenberg P. Full publication of results initially presented in abstracts. A meta-analysis. JAMA 1994;272:158-62.

  4. Simes J. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986;4:1529-41.

  5. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867-72.

  6. Gøtzsche PC. Reference bias in reports of drug trials. BMJ 1987;295:654-6.

  7. Melker HE de, Rosendaal FR, Vandenbroucke JP. Is publication bias a medical problem? [letter]. Lancet 1993;342:621.

  8. Chalmers I. Publication bias [letter]. Lancet 1993;342:1116.

  9. Egger M, Davey Smith G. Misleading meta-analysis. BMJ 1995;310:752-4.

  10. Chalmers I. Underreporting research is scientific misconduct. JAMA 1990;263:1405-8.

  11. Pearn J. Publication: an ethical imperative. BMJ 1995;310:1313-5.

J.P. Vandenbroucke

Leiden, December 1995,

The basic difference in our opinions is rather simple. It can be summarized in one sentence by Cornfield, in a theoretical paper on randomization and randomized controlled trials in 1976.1 Cornfield wrote that there are statistical issues in randomized trials for which unambiguous answers appear unattainable, and that this must be so theoretically. That the results of clinical trials can never be unambiguous is due to several mechanisms, mostly relating to prior and therefore subjective beliefs. The overenthusiastic and selective reporting of findings that raise hope and the quiet burial of findings that are disappointing form only one aspect. How clinical trial results can ride upon a ‘random high’ or a ‘random low’ has been explored by Pocock and Hughes in the case of early stopping of a trial.2 These mechanisms also operate in trials that end at their preplanned endpoint, and in non-randomized studies. For example: side effects are discovered because of a cluster – most likely the cluster is a ‘random high’, and the real incidence will be lower, but it still helps us to discover the side effect. In the case of a clinical trial, a result that is spuriously high when there is no real difference (neither beneficial nor adverse) will give rise to an overenthusiastic report and to a temporary fashion in therapy or diagnosis that will do neither good nor harm. Sooner or later, it will be quietly replaced.
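How an estimate caught on a ‘random high’ exaggerates a treatment effect can be made concrete with a small simulation. The sketch below is an illustration of that statistical point only, not part of the original correspondence; the trial size, stopping rule and number of simulated trials are arbitrary assumptions chosen for the example.

```python
import numpy as np

# Minimal sketch: simulate two-arm trials of a treatment with NO true effect
# and compare the apparent effect in trials stopped early at an interim look
# with the effect in trials that run to their preplanned endpoint.
rng = np.random.default_rng(0)

def run_trial(n_per_arm=200, interim_fraction=0.5, z_stop=1.96):
    treatment = rng.normal(0.0, 1.0, n_per_arm)  # true difference is zero
    control = rng.normal(0.0, 1.0, n_per_arm)

    # Interim analysis on the first half of the planned patients.
    k = int(n_per_arm * interim_fraction)
    diff = treatment[:k].mean() - control[:k].mean()
    se = np.sqrt(2.0 / k)        # standard error, unit variance in each arm
    if abs(diff / se) > z_stop:  # 'significant' at the interim look
        return diff, True        # stop early: the estimate rides a random high (or low)

    # Otherwise continue to the planned end of the trial.
    return treatment.mean() - control.mean(), False

results = [run_trial() for _ in range(20000)]
early = [abs(d) for d, stopped in results if stopped]
full = [abs(d) for d, stopped in results if not stopped]

print(f"stopped early: {len(early)} trials, mean |difference| = {np.mean(early):.3f}")
print(f"ran to the end: {len(full)} trials, mean |difference| = {np.mean(full):.3f}")
# Although the true effect is zero, the trials that stop early (or that are
# reported selectively because they look striking) show a much larger apparent
# effect than the trials that reach their preplanned endpoint.
```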

Subjectivity is simply inescapable in all phases of clinical research: planning, execution, analysis and reporting. The ultimate subjectivity is in the interpretation of published results. For example, I do not believe the results of the randomized trial that showed that smoking cessation does not prolong life3 – as a matter of fact, few epidemiologists ever discuss that trial or use it in any meta-analysis or overview of smoking and disease. It has been quietly buried. Similarly, I completely ignore all trials showing that homeopathy does work, since such trials are randomizations between two placebos, and therefore in essence meaningless.

Understanding the world necessitates integration of new data into existing knowledge. That knowledge is imperfect, but so are the new data. This view is not new. Let me, in true Anglo-Saxon medical tradition, take refuge and solace in the writings of Sir William (Osler), for example in his 1906 Harveian Oration upon ‘The growth of Truth’.4 Mark the sentences outlining his ideas: ‘In the first place, like a living organism, Truth grows, and its gradual evolution may be traced from the tiny germ to the mature product (...) Truth may suffer all the hazards incident to generation and gestation’, and, ‘Secondly, all scientific truth is conditioned by the state of knowledge at the time of its announcement (...) as Harvey says: “opinion is the source of opinion” (...)’. I counsel the remainder of the text as salutary reading to those with greater passion than I have.

J.P. Vandenbroucke
References
  1. Cornfield J. Recent methodological contributions to clinical trials. Am J Epidemiol 1976;104:408-21.

  2. Pocock SJ, Hughes MD. Practical problems in interim analyses, with particular regard to estimation. Control Clin Trials 1989;10:S209-21.

  3. Rose G, Hamilton PJS, Colwell L, Shipley MJ. A randomised controlled trial of anti-smoking advice: 10-year results. J Epidemiol Community Health 1982;36:102-8.

  4. Osler W. The growth of Truth: as illustrated in the discovery of the circulation of the blood. BMJ 1906;2:1077-84.