The Evidence Base for Orthopaedics & Sports Medicine

According to Stefan Lohmander and Ewa Roos, writing in an editorial in the British Medical Journal, the evidence base for orthopaedics and sports medicine is ‘scandalously poor in parts’. This is what they found:

Medicine rests on an uneven evidence base. Some interventions are supported by large multicentre randomised controlled trials that have a low risk of bias and are powered for hard endpoints—a high level of evidence. Others depend on retrospective observational data that provide a lower level of evidence. Yet others were theorised and considered biologically or mechanistically plausible and are heirlooms of “eminence based medicine.”

Some interventions are just plain wrong and have real costs and harms, without countervailing benefits. Medical reversals may occur when well done clinical trials, systematic reviews, and meta-analyses of trials find current practice to be no better than a lesser treatment or placebo. Classic examples of reversals are anti-arrhythmic drugs for patients with recent myocardial infarction and hormone replacement therapy for menopausal women.1

The evidence base for orthopaedics compares unfavourably with other fields of medicine. Only 20% of procedures are estimated to be supported by at least one low-risk-of-bias randomised controlled trial showing that surgery is superior to a non-operative alternative.2

A similar review of the evidence base for sports medicine is not available. However, a recent inventory of randomised trials published during 2013 in the American Journal of Sports Medicine and the British Journal of Sports Medicine points to only a modest level of evidence in support of sports medicine practice. Only about a third of the more than 40 original randomised trials provided an approved trial registry identifier, and only 25 of the reports presented a clearly defined primary endpoint. In addition, all articles had problems with multiple comparisons in the statistical analysis and no clear strategy for dealing with them.3 As a result, the empirical support provided by these trials is overestimated, and a sizeable proportion of interventions used in sports medicine may not be based on high level evidence.

The need for evidence in sports medicine practice will be discussed at a symposium at the Danish Sports Medicine meeting later this month.

In both orthopaedic surgery and sports medicine, it is unclear whether some surgical interventions are better than non-surgical alternatives or better than placebo in the form of sham surgery. Recent examples where surgical interventions were shown to confer no benefit over non-operative alternatives or sham surgery include arthroscopic surgery in middle aged and older people with persistent knee pain,4 surgical reconstruction of acute rupture of the anterior cruciate ligament in young active adults,5 and vertebroplasty to treat pain associated with vertebral fractures.6 A recent systematic review of the evaluation of surgery found that half of the studies that used placebo controls provided evidence against the continued use of the investigated surgical procedures.7 Trials without sham control may show benefit for the surgical procedure when compared with no treatment or ineffective non-operative treatment, or when the outcome is compared with pre-treatment status. But it is difficult to justify invasive surgery with associated risks simply to obtain an effect similar to the placebo effect of sham surgery.

Other interventions where innovation and dissemination have moved rapidly and far ahead of evaluation and evidence are arthroscopic surgery for hip problems and femoro-acetabular impingement, and injections of platelet enriched plasma preparations or autologous stem cells for musculoskeletal soft tissue injuries.8 9 Past experience, such as the use of bone morphogenetic protein-2 in spinal surgery and metal-on-metal implants for hip osteoarthritis, shows the risks of serious patient harms when moving too fast and too far beyond evidence.10 11 It also illustrates what has been termed Buxton’s law: “it is always too early [for rigorous evaluation], until, unfortunately, it’s suddenly too late.”12

To reduce the risk of future failures, we must encourage the stepwise introduction of new surgical procedures and devices.13 Best practice must routinely be based on evidence, shared decision making, and monitoring and analysis of outcomes. And ineffective practices must be eliminated, painful as it may be for their supporters.

Clinical impressions can be deceiving. Where high level evidence speaks against abundant clinical experience and ingrained, unquestioned routine, cognitive dissonance results.6 Defenders of questioned treatments focus on potential scientific flaws in the published trials to invalidate trial results and thereby decrease their cognitive dissonance, while ignoring the inherent biases of clinical experience and the phenomenon of the physician as a placebo reactor. It was, for example, suggested that participants in sham controlled surgical trials "may not be of entirely sound mind" and research performed on such people "not generalisable to mentally healthy patients."14

Confirmation bias reigns and we ignore, or do not want to be exposed to, information or opinions that challenge what we already believe, while wanting to hear information and beliefs that confirm what we already believe. This human trait contributes to overconfidence in personal beliefs and maintains and strengthens beliefs in the face of contrary evidence. The effects are stronger for emotionally charged issues and deeply entrenched views. As a result, proponents of questioned interventions fight hard for their interventions and specialties and often delay change, when the appropriate and ethical action would be to abandon ship.1 15


BMJ 2015;350:g7835

References in the article can be found on the BMJ website.

