What is Clinical Relevance?
Good information saves lives, and bad information kills people. As a result, debating how to measure and describe the clinical relevance of research information is important, and directly impacts clinicians' and patients' lives. From a publisher's perspective, clinical relevance describes the importance of information to clinicians: anyone involved in patient care. If we as publishers could easily and transparently measure clinical relevance (whether in journals, in books, digitally, or via any other means we use to make medical information available to clinicians), this would enable busy clinicians to focus their clinical reading, stay up to date with new developments, deliver best practice, and spend more time doing what they do best: helping ill people. How to measure the level of clinical relevance, however, is complex. It presents a formidable challenge to publishers around the world, with little demonstrable success to date.
Publishers are usually comfortable with traditional measures of relevance for the research community. These traditional measures are most often derived from citation indexes (like Impact Factor, Eigenfactor, H index, and the individual citations gathered by articles). Quantifying relevance of information to a practicing clinical community is much more difficult and much further from traditional publishers' comfort zones. Single pieces of research occasionally lead to widespread change in clinical practice, but this is rare. More commonly, the clinical relevance of published information falls on a spectrum ranging from having no clinical relevance at all through to content which in itself, or as part of a more complex information landscape, has some level of impact on what clinicians think or do. Furthermore, clinical relevance itself will vary between individual clinicians and between clinical groups, and so views of this clinical relevance spectrum will also vary considerably.
How Do We Determine Which Individual Pieces or Groups of Information are Clinically Relevant?
First, it is necessary to distinguish between relevance and validity. Even an article with a high level of internal validity (a trial with perfect methodology and completely correct, accurate conclusions) can still present research that is not clinically relevant (ie, it lacks external validity). Conversely, an article may appear to be highly clinically relevant (even practice changing), but if the methodology or conclusions are not valid, any relevance to clinical practice will be limited. Validity also represents a spectrum: a paper is unlikely to be completely “wrong,” but it is important to determine whether any problems identified within a paper are important with respect to the question asked by a clinician.
One way to determine if a piece of research is deemed clinically relevant by a medical community could be to identify whether that research is included in a valid clinical guideline. However, although guidelines and those who produce them often have a profoundly positive impact on health, guideline production varies globally and not every guideline is perfect. Important debate continues globally about guideline quality and the transparency with which guideline authors and guideline creation processes operate. Some guidelines may miss key pieces of research; some may be outdated (eg, those for the use of ultrasonography to detect potential miscarriage; see Jeve et al, “Accuracy of first-trimester ultrasound in the diagnosis of early embryonic demise: a systematic review,” and Abdallah et al, “Limitations of current definitions of miscarriage using mean gestational sac diameter and crown–rump length measurements: a multicenter observational study”); and some may erroneously incorporate “bad” research. Some guidelines fail to report key conflicts of interest, and there is a view that most published research results are false (see “Why most published research findings are false” and “Don't just swallow, check the evidence first”).
What is needed, therefore, is a more objective approach to measuring clinical relevance.
There are several indexes (formal and informal) that are often used to determine whether something is clinically relevant, including the Impact Factor and the Immediacy Index, from Thomson Reuters' Journal Citation Reports; databases like the Faculty of 1000, with their post-publication rating scales (see “F1000 Journal Rankings—The Map Is Not the Territory”); journal reputation; and usefulness in practice. These are all, to varying degrees, possible ways to consider clinical relevance, but all are limited, and neither alone nor in combination do they fully and objectively quantify clinical relevance. For example, journal reputation and usefulness in practice are subjective measures that vary regionally and are prone to rapid change. If a respected journal loses credibility, does that make all previously published research in that journal less clinically relevant? If a technique used in surgery is supplanted by a new technique, does the previously widely used technique become less clinically relevant? Maybe.
Perhaps one good indication of relevance is immediacy of use. Newer research can confirm, augment, or replace older research. New technology can replace old methods of practice. This would suggest that the Immediacy Index and, therefore, the Impact Factor are important components in measuring clinical relevance, albeit most often to researchers. However, just because something is cited soon after its publication does not mean it is always clinically relevant. Caution should also be applied: rapidly and widely adopting new research can carry risks (see “Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes,” and Curfman et al, “Expression of concern: Bombardier, ‘Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis’”), as can adopting recommendations from new research too slowly (see “Delay in receiving rheumatology care leads to long-term harm”).
Another component in the assessment of clinical relevance is the population to which the information in question refers. Much like this article would not be relevant to a group of farmers, an article about cardiovascular surgery is unlikely to have high clinical relevance to a group of ophthalmologists. In other words, results from the study population must be generalizable to a clinician's own population of interest if the information is to move up the clinical relevance spectrum for that clinician.
Another consideration is clinical importance. To be clinically relevant, information has to answer a question which in itself is important to patients or clinicians. To paraphrase well-known evidence-based medicine lore: “If you ask the wrong question, you will get the wrong answer.”
Finally, to be clinically relevant, research has to suggest a management approach that is practical for both patients and clinicians. Even if results are current, important, and refer to the appropriate population, if they suggest an approach that is impossible for whatever reason for either clinician or patient, they will be of little clinical relevance to either group.
Constructing a Measure of Clinical Relevance With What We Have
To construct a measure of clinical relevance from what we already have, we need a framework within which to work. A recent series of sometimes provocative editorials presented visions of clinical relevance from 4 different specialties (a note to readers: these editorials are from a journal published by one of the authors of this paper; while the authors recognize the potential bias, they believe that the editorials serve an appropriate purpose in this discussion). The editorials, all written by clinicians, argue that (1) for research to be clinically relevant, it needs to be published (fast) where clinicians can and do read it (or use it), and be written in ways that are useful for practitioners (“Rheumatic disease research and implications for clinical care”); (2) authors writing for clinicians should interpret their research in ways that are meaningful for practitioners, using techniques like “number needed to treat” (“Does it work, will it work, and is it worth it? A call for papers (and reviewers) regarding effective treatments for psychiatric disorders”); (3) high-grade evidence, like systematic reviews, should play a central role in the transfer of evidence into practice (“Not a call for papers – a call for systematic reviews!”); and (4) the seeds of innovation, often seen only in smaller (albeit perhaps somehow flawed) studies, should be encouraged through a more receptive response to creative, innovative clinical researchers (“Cardiovascular disease and the seeds of innovation: a call for papers”). Papers identifying innovative approaches and interventions should be highlighted as such, to emphasize the caution required when considering the application of their findings. Extending these arguments in a way that risks being too reductive, these philosophies suggest that a measure of clinical relevance might be created from a composite of accessibility and suitability, meaning and utility, internal validity and generalizability, and innovation and creativity (Table I).

Table I. Framework for a composite measure of clinical relevance, derived from the addition of 4 equally weighted scores to give a possible total score of 40.
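The arithmetic behind the framework in Table I can be sketched as follows. This is a hypothetical illustration only: the text specifies 4 equally weighted components summing to a possible total of 40, so we assume each component is scored from 0 to 10; the component names are taken from the composite described above, and the function and variable names are our own.

```python
# Hypothetical sketch of the composite clinical-relevance score described in
# Table I: four equally weighted components, each assumed to be scored 0-10,
# summed to give a possible total of 40.

COMPONENTS = (
    "accessibility_and_suitability",
    "meaning_and_utility",
    "internal_validity_and_generalizability",
    "innovation_and_creativity",
)

def clinical_relevance_score(scores: dict) -> float:
    """Sum the four equally weighted component scores (each 0-10)."""
    total = 0.0
    for name in COMPONENTS:
        value = scores[name]
        if not 0 <= value <= 10:
            raise ValueError(f"{name} must be between 0 and 10, got {value}")
        total += value
    return total

# Example: a paper that is accessible and valid but only modestly innovative.
example = {
    "accessibility_and_suitability": 8,
    "meaning_and_utility": 6,
    "internal_validity_and_generalizability": 9,
    "innovation_and_creativity": 4,
}
print(clinical_relevance_score(example))  # 27.0 out of a possible 40
```

Equal weighting is itself a judgment call; a specialty or healthcare setting could reasonably re-weight the components, which is consistent with the subjectivity acknowledged below.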
Even a framework like this is open to subjective interpretation, and perhaps it should be: what is relevant in one healthcare setting for one doctor with one patient may be extensible to other settings but, because of population, preference, or resource differences, may be irrelevant to other people and need throwing out the window!
How Do We Trust That Something Considered to Be Clinically Relevant Actually is?
As electronic medical records and clinical decision support systems progress, the future holds more direct and immediate assessments of whether measures of clinical relevance are accurate. Clinical analytic functions of electronic medical record systems will allow real-time or near real-time measurement of whether a new piece of information has improved an important quality outcome, and whether that information matches up to its clinical relevance score. Furthermore, within the next 3 years it will become increasingly common for health care providers to be assessed and judged on quality outcomes, rather than on historical measures of patient turnover or throughput.
At present, this rapid real-life measurement of information impact is not possible, or at least not possible in the majority of health care settings. As a result we must often rely on less direct measures, such as changes in population health or anecdotal evidence from clinicians or smaller clinical groups about whether information helped improve important outcomes.
Conclusion: What Next?
Clinical autonomy has been hotly debated since the time of Aristotle, and the debate continues today. Should individual clinicians decide which information they believe and which they do not? Should those individual clinicians be able to pick and choose which pieces of information they put into practice, and which they ignore? Or should some higher and wiser authority tell every clinician that because a new piece of research is highly clinically relevant to a particular patient population, every clinician working with that population must do exactly what that information says … or else! This crucial social debate will not be resolved here, sadly. However, it is clear that medical publishers like us could and should do much more to help save lives by consistently and transparently helping clinicians identify pieces of clinically relevant information that could or should have a direct and, sometimes, immediate impact on clinical practice.