Quantifying clinical relevance in treatments for psychiatric disorder | Brief Report | Volume 33, Issue 12, pB11–B15, December 01, 2011

Measuring Clinical Relevance and Impact in Journal Publishing: A Proposed Model and Publisher's Perspective

      Introduction

      This perspective article discusses how publishers might measure clinical relevance and impact following publication. It is written from a publisher's point of view and aims to prompt discussion and debate rather than to be an end point in and of itself. We do not attempt to discuss assessment of clinical relevance and impact prior to publication. We propose that the journal ecosystem itself, to a degree, applies this filter: authors with work that they believe to be clinically relevant submit that work to journals that they believe reach a clinical audience. This warrants further investigation and discussion.

      What is Clinical Relevance?

      Good information saves lives, and bad information kills people. As a result, debating how to measure and describe the clinical relevance of research information is important and directly affects clinicians' and patients' lives. From a publisher's perspective, clinical relevance describes the importance of information to clinicians: anyone involved in patient care. If we as publishers could easily and transparently measure clinical relevance (whether in journals or books, digitally, or via any other means we use to make medical information available to clinicians), busy clinicians could focus their clinical reading, stay up to date with new developments, deliver best practice, and spend more time doing what they do best: helping ill people. How to measure the level of clinical relevance, however, is complex. It presents a formidable challenge to publishers around the world, with little demonstrable success to date.
      Publishers are usually comfortable with traditional measures of relevance for the research community. These measures are most often derived from citation indexes (eg, the Impact Factor, the Eigenfactor, the h-index, and the citations gathered by individual articles). Quantifying the relevance of information to a practicing clinical community is much more difficult and much further from traditional publishers' comfort zones. Single pieces of research occasionally lead to widespread change in clinical practice, but this is rare. More commonly, the clinical relevance of published information falls on a spectrum, ranging from no clinical relevance at all to content that, in itself or as part of a more complex information landscape, has some level of impact on what clinicians think or do. Furthermore, clinical relevance itself will vary between individual clinicians and between clinical groups, and so views of this clinical relevance spectrum will also vary considerably.

      What Broad Types of Research May Be Clinically Relevant?

      When we discuss clinical relevance, we are generally thinking about the following three kinds of research: (1) research that changes, or ought to change, practice, either answering a new question or changing the answer to an existing question (eg, the MRC CRASH trial of intravenous corticosteroids in head injury [Roberts et al], the Scandinavian Simvastatin Survival Study [4S], and a prevalence study of errors in opioid prescribing [Denison Davies et al]); (2) research that consolidates existing practice, reassuring clinicians they are doing the right thing (eg, a Cochrane review of pain relief for femoral sheath removal in interventional cardiology [Wensley et al]); and (3) research that is simply of interest to clinicians but has no immediate impact on practice.

      How Do We Determine Which Individual Pieces or Groups of Information are Clinically Relevant?

      First, it is necessary to distinguish between relevance and validity. Even a trial with perfect methodology and completely correct, accurate conclusions (ie, a high level of internal validity) can present research that is not clinically relevant (ie, that lacks external validity). Conversely, an article may appear to be highly clinically relevant (even practice changing), but if the methodology or conclusions are not valid, any relevance to clinical practice will be limited. Validity also represents a spectrum: a paper is unlikely to be completely “wrong,” but it is important to determine whether any problems identified within a paper are important with respect to the question asked by a clinician.
      One way to determine whether a piece of research is deemed clinically relevant by a medical community could be to identify whether that research is included in a valid clinical guideline. However, although guidelines and those who produce them often have a profoundly positive impact on health, guideline production varies globally and not every guideline is perfect. Important debate continues about guideline quality and the transparency with which guideline authors and guideline creation processes operate. Some guidelines may miss key pieces of research; some may be outdated (eg, those for the use of ultrasonography to detect potential miscarriage [Jeve et al; Abdallah et al]); and some may erroneously incorporate “bad” research. Some guidelines fail to report key conflicts of interest (Godlee), and there is a view that most published research findings are false (Ioannidis). What is needed, therefore, is a more objective approach to measuring clinical relevance.
      There are several indexes, formal and informal, that are often used to determine whether something is clinically relevant: the Impact Factor and the Immediacy Index, from Thomson Reuters' Journal Citation Reports; databases like Faculty of 1000, with their post-publication rating scales; journal reputation; and usefulness in practice. These are all, to varying degrees, possible ways to consider clinical relevance, but all are limited, and neither alone nor in combination do they fully and objectively quantify it. For example, journal reputation and usefulness in practice are subjective measures that vary regionally and are prone to rapid change. If a respected journal loses credibility, does that make all previously published research in that journal less clinically relevant? If a technique used in surgery is supplanted by a new technique, does the previously widely used technique become less clinically relevant? Maybe.
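      For readers less familiar with the two Journal Citation Reports metrics mentioned above, both are simple ratios of citations to published items. A minimal sketch follows; the function names and all figures are invented for illustration.

```python
# Illustrative calculation of two Journal Citation Reports metrics.
# Function names and all numbers are our own, invented for the example.

def impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """2-year Impact Factor: citations received in a given year to items
    published in the previous 2 years, divided by the number of citable
    items published in those 2 years."""
    return cites_to_prev_two_years / items_prev_two_years

def immediacy_index(cites_to_current_year, items_current_year):
    """Immediacy Index: citations received in a given year to items
    published in that same year, divided by the items published that year."""
    return cites_to_current_year / items_current_year

# A hypothetical journal in 2011: 450 citations to its 150 papers from
# 2009-2010, and 60 citations to its 80 papers published in 2011.
print(impact_factor(450, 150))    # 3.0
print(immediacy_index(60, 80))    # 0.75
```

      The contrast between the two ratios is the point made in the text: the Immediacy Index captures how quickly material is cited, which is at best an indirect proxy for how quickly it is used in practice.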
      Perhaps one good indication of relevance is immediacy of use. Newer research can confirm, augment, or replace older research, and new technology can replace old methods of practice. This would suggest that the Immediacy Index and, therefore, the Impact Factor are important components in measuring clinical relevance, albeit most often to researchers. However, just because something is cited soon after its publication does not mean it is clinically relevant. Caution should also be applied: rapidly and widely adopting new research can carry risks (eg, rosiglitazone [Nissen and Wolski] and rofecoxib [Curfman et al]), as can adopting recommendations from new research too slowly (Bykerk and Emery).
      Another component in the assessment of clinical relevance is the population to which the information in question refers. Just as this article would not be relevant to a group of farmers, an article about cardiovascular surgery is unlikely to have high clinical relevance for a group of ophthalmologists. In other words, results from the study population must be generalizable to a clinician's own population of interest if the information is to move up the clinical relevance spectrum for that clinician.
      Another consideration is clinical importance. To be clinically relevant, information has to answer a question which in itself is important to patients or clinicians. To paraphrase well-known evidence-based medicine lore: “If you ask the wrong question, you will get the wrong answer.”
      Finally, to be clinically relevant, research has to suggest a management approach that is practical for both patients and clinicians. Even if results are current, important, and refer to the appropriate population, if they suggest an approach that is impossible for whatever reason for either clinician or patient, they will be of little clinical relevance to either group.

      Constructing a Measure of Clinical Relevance With What We Have

      To construct a measure of clinical relevance from what we already have, we need a framework within which to work. A recent series of sometimes provocative editorials presented visions of clinical relevance from 4 different specialties (a note to readers: these editorials are from a journal published by one of the authors of this paper; although we recognize the potential bias, we believe the editorials serve an appropriate purpose in this discussion). The editorials, all written by clinicians, argue that (1) for research to be clinically relevant, it needs to be published quickly, where clinicians can and do read or use it, and written in ways that are useful for practitioners (Costenbader); (2) authors writing for clinicians should interpret their research in ways that are meaningful for practitioners, using techniques like “number needed to treat” (Citrome); (3) high-grade evidence, like systematic reviews, should play a central role in the transfer of evidence into practice (Minhas); and (4) the seeds of innovation, often seen only in smaller (albeit perhaps somehow flawed) studies, should be encouraged through a more receptive response to creative, innovative clinical researchers (Wierzbicki). Papers identifying innovative approaches and interventions should be highlighted as such, to emphasize the caution required when considering the application of their findings. Extending these arguments, at the risk of being too reductive, these philosophies suggest that a measure of clinical relevance might be created from a composite of accessibility and suitability, meaning and utility, internal validity and generalizability, and innovation and creativity (Table I).
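      Point (2) above recommends clinically useful statistics such as “number needed to treat,” which is simply the reciprocal of the absolute risk reduction. A minimal sketch follows; the function name and event rates are invented for illustration.

```python
import math

def number_needed_to_treat(control_event_rate, treatment_event_rate):
    """NNT = 1 / absolute risk reduction (ARR), conventionally rounded
    up to the next whole patient. Event rates are proportions in [0, 1]."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return math.ceil(1 / arr)

# Hypothetical trial: the outcome occurs in 20% of controls and 10% of
# treated patients, so ARR = 0.10 and 10 patients must be treated to
# prevent one additional event.
print(number_needed_to_treat(0.2, 0.1))  # 10
```

      Expressed this way, a result is immediately interpretable at the bedside in a way that a relative risk ratio alone is not, which is the editorial's point.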
      Table I. Framework for a composite measure of clinical relevance, derived from the addition of 4 equally weighted scores to give a possible total score of 40.

      Parameter: Accessibility and suitability
      Comments: Information usage among clinical readers. Given the right technology, this could be measured digitally. Alternatively, a predictive (rather than reflective, post hoc) surrogate measure could be the circulation figures, to clinical audiences, of the journal or website that publishes the information.
      Score: Maximum 10 points; range, 0–10 points (0 points: accessed exclusively by researchers; 10 points: accessed exclusively by clinicians).

      Parameter: Meaning and utility
      Comments: A score to record whether a clear statement of guidance for clinical audiences is incorporated into the information, and whether clinically useful statistical measures, such as number needed to treat, are used to interpret results.
      Score: Maximum 10 points. 0 points: no clinical guidance statement; 5 points: clinical guidance statement included. Plus: 0 points: no clinically useful statistical measures used; 5 points: clinically useful statistical measures included.

      Parameter: Internal validity and generalizability
      Comments: Measured by ranking the quality of evidence, including an assessment of methodological rigor, and descriptive information about “population match” (ie, how similar the study population is to the population to whom the conclusions should be applied). If there are population differences, consideration should be given to whether the exact nature of these differences will affect the outcome of interest and, if so, in what way.
      Score: Maximum 10 points. Evidence quality, adapted from the Oxford Centre for Evidence-based Medicine “Levels of Evidence (March 2009)” scale (see reference for full definitions of terms): 2 points: “Level 5 evidence or troublingly inconsistent or inconclusive studies of any level”; 3 points: “Level 4 studies or extrapolations from level 2 or 3 studies”; 4 points: “Consistent level 2 or 3 studies or extrapolations from level 1 studies”; 5 points: “Consistent level 1 studies.” Plus: 0 to 5 points for population match (0 points: population not similar to the population to whom the conclusions should be applied; 5 points: population identical to that population).

      Parameter: Innovation
      Comments: A score reflecting need and novelty, as measured by whether a clinical problem needs answering among the intended audience, or whether it has already been answered adequately, making further identical experimental research unethical (Young and Horton).
      Score: Maximum 10 points (0 points: not needed, already answered; 10 points: needed).

      Total score: 0–40.
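      To make the arithmetic of Table I concrete, the composite can be sketched in code. This is our own illustration: the function name, parameter names, and example inputs are assumptions made for the sketch, not part of any published scoring tool.

```python
def clinical_relevance_score(clinician_access_fraction,
                             has_guidance_statement,
                             uses_clinical_statistics,
                             evidence_points,
                             population_match_points,
                             innovation_points):
    """Composite clinical relevance score (0-40) per Table I:
    four equally weighted components of 10 points each."""
    # Accessibility and suitability: 0 (accessed exclusively by
    # researchers) to 10 (exclusively by clinicians), scaled here from
    # the fraction of clinical readers (our assumption for the sketch).
    accessibility = round(10 * clinician_access_fraction)

    # Meaning and utility: 5 points for a clinical guidance statement,
    # 5 for clinically useful statistics such as number needed to treat.
    meaning = (5 if has_guidance_statement else 0) + \
              (5 if uses_clinical_statistics else 0)

    # Internal validity and generalizability: 2-5 points from the
    # Oxford CEBM levels-of-evidence ranking, plus 0-5 points for how
    # well the study population matches the target population.
    if not 2 <= evidence_points <= 5:
        raise ValueError("evidence_points must be 2-5 (CEBM ranking)")
    if not 0 <= population_match_points <= 5:
        raise ValueError("population_match_points must be 0-5")
    validity = evidence_points + population_match_points

    # Innovation: 0 (not needed, already answered) to 10 (needed).
    if not 0 <= innovation_points <= 10:
        raise ValueError("innovation_points must be 0-10")

    return accessibility + meaning + validity + innovation_points

# A hypothetical paper: 80% clinical readership, a guidance statement
# but no NNT-style statistics, consistent level 1 evidence (5 points),
# good population match (4), and a still-unanswered question (8).
print(clinical_relevance_score(0.8, True, False, 5, 4, 8))  # 30
```

      Summing four equally weighted 10-point components keeps any single dimension from dominating the total, mirroring the equal weighting stated in Table I.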
      Even a framework like this is open to subjective interpretation, and perhaps it should be: what is relevant in one healthcare setting, for one doctor with one patient, may be extensible to other settings, or, because of population, preference, or resource differences, may be irrelevant to other people and need throwing out the window!

      How Do We Trust That Something Considered to Be Clinically Relevant Actually Is?

      As electronic medical records and clinical decision support systems progress, the future holds more direct and immediate assessments of whether measures of clinical relevance are accurate. Clinical analytic functions of electronic medical record systems will allow real-time or near real-time measurement of whether a new piece of information has improved an important quality outcome, and whether that information matches up to its clinical relevance score. Furthermore, within the next 3 years it will become increasingly common for health care providers to be assessed and judged on quality outcomes, rather than on historical measures of patient turnover or throughput.
      At present, this rapid real-life measurement of information impact is not possible, or at least not possible in the majority of health care settings. As a result we must often rely on less direct measures, such as changes in population health or anecdotal evidence from clinicians or smaller clinical groups about whether information helped improve important outcomes.

      Conclusion: What Next?

      Clinical autonomy has been hotly debated since the time of Aristotle, and the debate continues today. Should individual clinicians decide which information they believe and which they do not? Should those individual clinicians be able to pick and choose which pieces of information they put into practice, and which they ignore? Or should some higher and wiser authority tell every clinician that because a new piece of research is highly clinically relevant to a particular patient population, every clinician working with that population must do exactly what that information says … or else! This crucial social debate will not be resolved here, sadly. However, it is clear that medical publishers like us could and should do much more to help save lives by consistently and transparently helping clinicians identify pieces of clinically relevant information that could or should have a direct and, sometimes, immediate impact on clinical practice.

      Acknowledgments

      All authors are employed by John Wiley & Sons, and, as such, benefit from the success of the company's publishing program. J. Ballard publishes clinical and research journals, including several for societies and royal colleges in Australia and New Zealand and throughout the Asia-Pacific region. C. Graf publishes clinical and research journals, including several for societies and royal colleges in Australia and New Zealand, the International Journal of Clinical Practice in the United Kingdom (the editor of this special feature for Clinical Therapeutics, Les Citrome, is senior editor of International Journal of Clinical Practice), the global Wiley Open Access journals in health sciences, and is Treasurer of Committee on Publication Ethics (a United Kingdom registered charity) and leads the Wiley-Blackwell publication ethics program. C. Young publishes Wiley-Blackwell's Global Clinical Solutions products, which support clinical decision making at the point of care.

      References

        • Roberts I.
        • Yates D.
        • Sandercock P.
        • et al.
        Effect of intravenous corticosteroids on death within 14 days in 10008 adults with clinically significant head injury (MRC CRASH trial): randomised placebo-controlled trial.
        Lancet. 2004; 364: 1321-1328
        • Scandinavian Simvastatin Survival Study Group
        Randomised trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S).
        Lancet. 1994; 344: 1383-1389
        • Denison Davies E.
        • Schneider F.
        • Childs S.
        A prevalence study of errors in opioid prescribing in a large teaching hospital.
        Int J Clin Pract. 2011; 65: 923-929
        • Wensley C.
        • Kent B.
        • McAleer M.
        • et al.
        Pain relief for the removal of femoral sheath in interventional cardiology adult patients.
        Cochrane Database Syst Rev. 2008; (4): CD006043
        • Jeve Y.
        • Rana R.
        • Bhide A.
        • Thangaratinam S.
        Accuracy of first-trimester ultrasound in the diagnosis of early embryonic demise: a systematic review.
        Ultrasound Obstet Gynecol. 2011; 38: 489-496
        • Abdallah Y.
        • Daemen A.
        • Kirk E.
        • et al.
        Limitations of current definitions of miscarriage using mean gestational sac diameter and crown–rump length measurements: a multicenter observational study.
        Ultrasound Obstet Gynecol. 2011; 38: 497-502
        • Ioannidis J.P.
        Why most published research findings are false.
        PLoS Med. 2005; 2: e124
        • Godlee F.
        Don't just swallow, check the evidence first.
        BMJ. 2011; 343: d4478
        • Davis P.
        F1000 Journal Rankings—The Map Is Not the Territory.
        (Accessed October 31, 2011)
        • Nissen S.E.
        • Wolski K.
        Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes.
        N Engl J Med. 2007; 356: 2457-2471
        • Curfman G.D.
        • Morrissey S.
        • Drazen J.M.
        • et al.
        Expression of concern: Bombardier et al, “Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis” (N Engl J Med. 2000; 343: 1520-1528).
        N Engl J Med. 2005; 353: 2813-2814
        • Bykerk V.
        • Emery P.
        Delay in receiving rheumatology care leads to long-term harm.
        Arthritis Rheum. 2010; 62: 3519-3521
        • Costenbader K.H.
        Rheumatic disease research and implications for clinical care.
        Int J Clin Pract. 2011; 65: 637-638
        • Citrome L.
        Does it work, will it work, and is it worth it? A call for papers (and reviewers) regarding effective treatments for psychiatric disorders.
        Int J Clin Pract. 2011; 65: 232-233
        • Minhas R.
        Not a call for papers – a call for systematic reviews!.
        Int J Clin Pract. 2011; 65: 518
        • Wierzbicki A.S.
        Cardiovascular disease and the seeds of innovation: a call for papers.
        Int J Clin Pract. 2011; 65: 107-108
        • Centre for Evidence-based Medicine. Oxford
        Levels of evidence.
        (Accessed September 12, 2011)
        • Young C.
        • Horton R.
        Putting clinical trials into context.
        Lancet. 2005; 366: 107-108