Assessing Impact of Biomedical Scholarship in the Information Age: Observations on the Evolution of Biomedical Publishing and a Proposal for a New Metric



 

Robert Hogan, MD1,2

Perm J 2019;23:18.037

https://doi.org/10.7812/TPP/18.037
E-pub: 08/22/2019

ABSTRACT

This editorial discusses the state of the art of biomedical publication and the history, evolution, and growing complexity of indexing. The traditional method of journal assessment, the journal impact factor, remains in wide use but is compromised by well-documented deficiencies. Present-day alternatives to the journal impact factor are listed, and a proposal to develop a novel metric of merit in publication, the influence factor, is described.

INTRODUCTION

Scholarship is near and dear to the profession of medicine. As medical professionals, we are scientists first. We study, we learn, we observe, we theorize, we ponder; perhaps we involve ourselves in ethical scientific experiments (eg, clinical trials). When we have something to say deserving of a broad audience, we write. We weather peer review. We publish. With luck and a bit of fortitude, interesting feedback arrives from near and far, and knowledge is advanced. In the end, not only have we invested our time in what we produce, but much also potentially rests on successful authorship: Career advancement, professional recognition, reputation, credibility, and expertise are all potentially won or lost in the drive to produce important, high-quality work and to have that work published in the right places. Every physician is in some sense an author. The clinician writes notes on every patient encounter, the consultant produces expert reports, the radiologist issues interpretations of imaging, and so on. Some are moved to formally enter their work into the world of scholarly literature and, in so doing, join the fellowship of published biomedical authors, which is not a closed association. Barriers to the submission of scholarly work exist, but they are not insurmountable. Even when the path to publication is strenuous to tread, all in the broad enterprise are welcome to participate.

The rise of social media reflects the reality that the urge to be “heard” via written forums (authorship!) is widespread. Blogging, tweeting, email blasting, and Facebook posting appear to be surrogates for or potential precursors to the more complex and laborious traditional forms of scholarly endeavor described.

For authors, submission of our work to the “right” publication is arguably nearly as important as the substance of the work itself. A superb article buried in an arcane publication might never obtain the recognition rightfully due it. At worst, a young researcher might be seen as failing a vital test, lose a promotion to a higher academic standing, fail to attract grants, be seen as irrelevant, or miss the awarding of tenure. What, then, is the “right” publication? Well-known, long-established, prestigious journals such as the New England Journal of Medicine, the Journal of the American Medical Association, and the Annals of Internal Medicine have massive publication histories, deep resources, distinguished contributors, and enviable reputations. Passing peer review in those venues is daunting, to say the least, particularly early in career development. However, there are literally thousands of lesser-known titles to choose from, depending on the intended audience, specialty orientation, style, and content of a given work.

Somehow there is a matchup between authors and publications; authors find a satisfactory publication, editors find suitable authors, and publications and content flow. Thoughts move about on paper or electronically via the Internet, blogs, and, of late, social media. The speed with which ideas move is vastly different from earlier eras, as are the scope and scale of the movement of information. Curious observers should query not only the details of what moves where and when, but ultimately what all the changes amount to.

For authors and editors alike, at the end of the day, the inevitable questions surely arise: What effect has our work had? Who has heard us, and who has incorporated our thoughts and conclusions into their own thinking? How good is our work, and how do we know? What influence have we had by publishing? How much does our scholarly scientific work matter, and how do we know that it does? One widely accepted historic indicator of the value of a published work is the journal impact factor, a measure of the average frequency with which a journal’s recent articles are cited in other literature. Publishing work in high-impact-factor journals is traditional evidence of scholarly excellence.
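
For reference, the conventional 2-year impact factor for a journal in year Y can be written in the same style as the formula proposed later in this article:

IF(Y) = C/N

where C = the number of citations received in year Y by items the journal published in years Y-1 and Y-2, and N = the number of citable items the journal published in those 2 years. As a hypothetical example, a journal whose 2016 and 2017 volumes contained 250 citable items that drew 500 citations in 2018 would have a 2018 impact factor of 500/250 = 2.0.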

For editors, manuscript selection—the drawing in, receiving, editorial processing, sorting, weeding out, facilitating improvement, and ultimately influencing the development of interesting, thought-provoking, genuinely useful manuscripts—may make or break a publication. Thus, the influence of authors and the influence of publications are inextricably intertwined: Authors wishing to maximize the clout of their work seek out prominent journals to submit to, and journals wish to draw in good authors producing high-quality articles. This tidy, interwoven system was unassailable for decades, until the world of information, and how it moves, transformed itself.

HISTORY OF THE STATE OF THE ART OF PUBLICATION

As an article is being developed, credibility demands that the author or authors place their work in context. For context, one must be familiar with other work germane to the topic, and to be familiar, one must have searched for and read other work in the field. Thus, to produce high-quality scholarly manuscripts, one must cite the literature. The process by which scholars determine who has written what, and where those writings may be found, is changing. For my generation, trained in the 1970s, Index Medicus was the huge, encyclopedic go-to reference work. This massive print work was a catalog of the publications of the biomedical scholar’s world. As the volume of material in print grew, it became clear that production of larger and larger compendiums of printed matter was becoming unwieldy. At the same time, the biomedical scholar wishing to assemble a personal collection of important materials had few options. One could keep collections of journals, bound or unbound, leading to massive shelves of paper, and strive to recall where important works were found within; or, more economically, one could tear out articles of individual interest and file them in some systematic way in hopes of locating them on demand. Keeping a personal file card system was another alternative.

Later, the National Library of Medicine began to produce MEDLINE, a computer-based storage, cataloging, and retrieval system to replace Index Medicus. Regarding what was essentially an enormous electronic file card system, I recall being dazzled to learn in the late 1970s that 6 million citations were contained in the early MEDLINE versions, which were then directly available only to librarians. A medical student in the 1970s could present a request for a literature search to the biomedical library staff and, in hours or days (depending on librarian workload), receive a dot matrix fanfold printout. In that now-obsolete manner, one could, with modest effort and some professional help, determine who had written what on a particular topic of interest, theoretically drawn accurately from any of millions of citations. Today, MEDLINE’s descendant PubMed (www.pubmed.gov) lists more than 29 million citations, and the body of data is open to the Internet-using public.

But who wrote what was not the only question of the day. When scholars wrote on important topics, they referred to others’ work. So also evolved the need to keep track of who cited whom. Being widely cited not only warms the heart of an author but suggests on its face that the author has clout, and in this respect, little has changed over the decades. What has changed is the rapidity with which an article is disseminated, the breadth of potential recognition and commentary that it may generate, and the vast expansion of venues for citation.

Current Contents was the tally sheet of the earlier day: A publication of publications, carrying copies of the title pages of many major biomedical journals. This, in turn, was intimately linked to an author citation index—in other words, the quantification of “who published what where” and “who cited whom where.” For measurement we must have descriptors, organization, classification, and so on. Enter the somewhat arcane fields of journalology and scientometrics. These disciplines foster understanding about what journals do in the course of their work and how we quantify certain goings-on in the literature of the scientific world.

Thoughtful observers began early on to visualize the utility of some kind of organized, methodical citation tracking system. In this environment arose the venerable journal impact factor. Conceptually described by Garfield1 in 1955, the journal impact factor was originally conceived as a practical way of sorting citations according to their point of origin, a means of helping librarians.1-3 In an elegant article, a mathematical model for calculating the journal impact factor was described.1 This proprietary measurement method has been much discussed and over time came to be something other than what it was conceived as: An indicator of journal quality. Garfield founded the Institute for Scientific Information in 1960, which also developed the highly useful and then well-respected Current Contents. Publishing in high-impact-factor journals became a surrogate marker of academic excellence.

To place all this in context, in 1955, one might mimeograph typewritten pages and hand distribute perhaps 100 copies, but there were no faxes, no copiers, no email, no blogs, and of course no Web sites because the Internet did not exist. At the same moment in history, slide rules were necessary for rapid calculations, sextants were used to determine position in unknown territory, all telephones were hardwired, and the lowest-cost connection plans involved party lines (shared circuits serving several households at once). Book publishers and newspapers were a pathway for the distribution of material, but formidable barriers existed to both the simple reproduction of the printed word and its distribution. A successful scientific textbook might have reached tens of thousands of readers over its lifetime. At the time of impact factor initiation, email had not been invented; Web sites were where arachnids hung out; blogs weren’t even a science fiction, pie-in-the-sky dream; and tweeting was what winged creatures of the air did. Not every information system developed in the 1950s has become obsolete in the Information Age, but one must ponder whether the original concept remains as important and legitimate 60 years later as it was in its heyday.

When the impact factor was conceived, the pathways to the recognition of an individual’s work were limited. Furthermore, the movement of scholarly information followed a slow, narrow, tightly defined path: Completion of a work, submission to 1 or more of a small number of publications for consideration, peer review, and publication.

A strength of the journal impact factor, however, is the simplicity with which Garfield originally defined it. As is so often the case, a strength can also be a weakness. Implicit in the journal impact factor, and in the whole notion of citations, is an unstated idea: By counting citations, one creates a closed loop. The metric captures only how often published scholars are referred to by other published scholars.

THE STATE OF THE ART IS EVOLVING

Today, the complete audience for a biomedical scholar publishing a manuscript, or for a biomedical publication itself, is not as clear. There was a day when scholars “spoke” with their publications to scholars, and there was some process—a wag would call it education—a kind of trickle-out effect that resulted in scientific information reaching the public at large.

Today, I would argue that we in the biomedical scientific fields have not just 1 audience but many. How could our audience possibly remain limited to our peers (other scientists, clinicians, scholars, and trainees), given the way that electronic media disseminate information? There is the broader world of trained professionals such as nurse practitioners, physician assistants, nurses, clinical assistants, psychologists, podiatrists, and chiropractors. Additionally, our readership may include literate laypersons who have little or no education in science (yet plenty of opinions) but a robust appetite for knowledge about their own health and that of their friends, family, and community.

Our most earnest wish as scholars, authors, and professionals in the biomedical journalism field is that efforts to promote high-quality publications will result in a better-informed world, and that those who seek to be well informed, to be educated, and to strive for wisdom will be edified. We may also hope and wish that the production of forthright, honest, high-quality biomedical information will readily translate to opportunities to benevolently affect clinical decision making.

At some point, the availability and utilization of a published article by an enormous international universe of readers stopped looking like heresy. Before Gutenberg’s printing press, a tiny class of citizens had access to the written word; paper was costly, and payment was required to view influential thoughts written down, so barriers existed to the movement of knowledge. Arguably, the past 50 years of paper-based publication resembled the pre-Gutenberg era: Articles were in journals, journals were available to elite audiences who paid for subscriptions, and the public remained uninformed unless experts were consulted. Of late, some publications offer open access, meaning that virtually anyone with an electronic device, an Internet connection, and a free search engine has a reasonable chance of locating, viewing, and potentially downloading for storage all open-access scientific information, thus eliminating some of the barriers previously in place. We have witnessed the democratization of information and knowledge.

Since the first iteration of the impact factor (in the 1960s), much has changed in how information moves. As news outlets provide coverage (air time) of important medical developments, the printing press steadily diminishes as the prime technical means of information dissemination. One does not know the qualifications of visitors to an article published in cyberspace: Laypersons, students, trainees, and professionals alike leave the same cyber footprints, their browsing behavior indistinguishable. Cybermetrics do, however, give us clues as to whether articles are being downloaded and how long readers linger.

In today’s transformed world of information movement, I and others are questioning how germane the journal impact factor is.

The journal impact factor, although in broad use to this day, has been extensively criticized.4-12 It has been rejected as a basis for evaluating research, critiqued as unreliable, and targeted by calls for its abandonment and replacement. A recent blistering critique called out major issues with the journal impact factor, calling it “highly misleading” and “meaningless as a predictive measure.”13 Yet the journal impact factor persists, perhaps because of institutional inertia, tradition, or the lack of materially improved alternatives.

COMPLEXITY OF THE STATE OF THE ART

If the journal impact factor is outdated, aging, flawed, or otherwise doubtful, one might expect that there are newer, better, more sophisticated methods by which biomedical scholars may reckon how their work is attended to. Perhaps most disturbing of all is the likelihood that the impact factor has been important enough that it is now of diminishing, if not dubious, value because unscrupulous individuals, hell-bent on achieving great numbers, subvert it by gaming. As an example, some publications demand that prospective authors cite works the journal has previously published as a condition of acceptance. This may be the Information Age, but with electronic publishing came not only the opportunity for a revolution in how scientific information is propagated but, regrettably, the opening for substantial subversion. Junk science has begotten junk journals. Rapacious desire for scientific recognition and advancement has spawned abominations such as predatory journals, a sort of tabloid pseudojournalism, the twisted sister of legitimate science and scientific publication. In this perverse scheme, authors pay to publish and are required to cite the publication they are involved with (one gaming strategy), a pay-to-pass arrangement designed to bump up impact factor scores. This strategy led one observer (a librarian) to propose a catalog of dubious publications, but that list is no longer actively maintained. If a new comprehensive metric of merit gains wide acceptance, one can anticipate that gaming the system14,15 would grow more difficult.

TOWARD DEFINING AN IDEAL INDEXING METHOD

As mentioned, the old publishing world has been upended by the Information Age or, more specifically, transformed from a closed system to a vastly open system. As indexing methods have proliferated, and with the passing of a half century since the conception of the journal impact factor, a robust discussion about generating an ideal indexing method is due.

The following are a few ideas about potential dimensions of a publication’s scientific value, considered in the abstract, as I see them on the basis of decades in medicine as a clinician, author, and editor:

  • Novelty: Genuine discovery is a core component of science.
  • Applicability: Widely useful insights are of greater consequence than narrower ones.
  • Appreciability: Most readers—certainly the author’s peers—should be able to appreciate the value of a work. If no one understands a discovery, it may still be important, but it is destined to languish in obscurity.
  • Sustainability: Deep science ought not to be based on transient blips but rather on milestones and trend makers.
  • Game-resistant merit: Good science has no room for trickery, sophistry, and the foolishness of dressing up mediocre work in elegant guise.
  • Indexing of important work: Scientific work that is important should be indexed within and outside traditional publishing channels, including doctoral theses; meeting presentations; and conceivably lectures, blogs, and articles.
  • Replacements for impact factor: Multiple alternative metrics that are potential replacements for the impact factor already exist (Table 1). None, however, fully addresses the dilemmas and opportunities presented by large-scale computing in the Information Age, nor concisely redefines the parameters worthy of consideration.


PROPOSAL FOR NEW UNIFIED MEASURE OF SCHOLARLY ACTIVITY: THE INFLUENCE FACTOR

Suppose one were to leap ahead conceptually to a moment when indexing becomes more inclusive. Rather than focusing exclusively on how often scholars cite a particular published work, let’s consider measuring the value of that work more broadly. How many people know about a published work (perhaps determined through surveys) could be one measure. Another could be how many people are influenced by what an author has written. If a discovery is reported, how widely is the discovery discussed? Is it tweeted or cited on Facebook; if tweeted, how often is it retweeted; if seen on Facebook, how often is it liked and shared? Does the material reach other media, such as radio or television? What markets, and what size audience, does each subsequent transfer to other media involve? There have long been commercial “clipping services” (focused on newspapers) and media extraction services (focused on television and radio activity). It is not too much of a leap of imagination to visualize an electronic indexing method in which all social media (Facebook, Twitter, and so on), all print media, and virtually all electronic media references to one’s work were melded together into a comprehensive, huge index, combining literally every important mention of a work. Perhaps then one would begin to grasp the true outlines of how broadly and how deeply the meaning of a work is moving through the world. Then another layer could be added for yet deeper importance: We might determine which works influence policy development or trigger legislation. Perhaps it would be important to know how many languages the new information enters. Another layer could be how durable the influence of the material is: Does it trigger the formation of study groups or the development of derivative research, and, perhaps most important of all, how durable is the core concept? One might also ponder whether 1 small fact triggers a cascade of investigation, or whether 1 large study of necessity leads to something more fundamental than 1 small study, even one that is well done.

I suggest that in this Information Age, adding new, broad, current-day variables to the calculus of merit will allow more exact measurement of the value of both individual manuscripts and the journals that offer them to the reading community. The development of a modern tool would reflect the reality of extremely broad audiences. If need be, each element might be weighted, and conceivably, institutions pondering scholarly merit might tune the weighting to local priorities.

Some acknowledgment of Web page views (“hits”) would be a start. Because some journals are paperless and exist virtually in cyberspace, their staff must know not only their “subscriber” base but also how often articles are viewed. Similarly, print journals with a parallel Web-based version know about visitors to their articles. Although several methods exist for analyzing metrics on the Internet, an entire science could arguably be developed around patterns of medical article utilization on the Web. These patterns could include how quickly an article is recognized by, say, the first 1000 readers (or the first 100,000, 1 million, etc); how sustained interest in the article is; how rapidly interest falls off; and, ultimately, how durable interest in a particular piece proves to be. These and other parameters are readily observed, measured, and recorded. How to interpret such data will make for an interesting discussion.
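
As a minimal sketch of how 2 such utilization patterns might be computed, consider the following Python functions. They assume nothing more than a hypothetical log of view timestamps for a single article; the function names and the log itself are illustrative and are not part of any existing analytics service.

    from datetime import datetime, timedelta
    from typing import List, Optional

    def time_to_n_views(views: List[datetime], n: int) -> Optional[timedelta]:
        """Elapsed time from the first view until the article reaches its nth reader."""
        if len(views) < n:
            return None  # the article has not yet reached n readers
        ordered = sorted(views)
        return ordered[n - 1] - ordered[0]

    def weekly_view_counts(views: List[datetime], weeks: int) -> List[int]:
        """Views per week since the first view; the shape of this series shows
        how rapidly interest falls off and how durable it proves to be."""
        if not views:
            return [0] * weeks
        start = min(views)
        counts = [0] * weeks
        for v in views:
            week = (v - start).days // 7
            if week < weeks:
                counts[week] += 1
        return counts

Whether a slow, sustained weekly curve should count for more than a sharp early spike is precisely the kind of interpretive question raised above.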

In today’s Internet world, an item that “goes viral” might reach millions overnight. In this context of nearly unlimited distribution of the written word in electronic form, is this not clearly the right moment to ponder what lies beyond impact factors? Surely authors wish to be well cited by other scholars as always. And, as has been the case for many decades, some of what finds its way past peer review and into the scientific literature will be picked up by the avid science monitoring and reporting industry and passed along to the enormous audience of science watchers. One need not hope for an individual article to go viral and reach tens of millions of viewers as an indicator of value. It seems apparent, however, that something that draws large numbers of views has tapped a reservoir of interest, thus establishing a new dimension of value.

In addition to cybermetrics, other elements are worthy of consideration. For instance, movement between media might be of interest—whether one’s work gets mentioned in newspapers, magazines, radio or television talk shows, or blogs. Most importantly, a metric should include evidence that one’s work begins to reach into policy decisions within universities, health organizations, the government, or the courts.

The time has come to move the useful concepts behind the journal impact factor into the Information Age. The technology exists to develop bold new methods. We as scholars in biomedicine write not only to inform the relatively small universe of fellow scholars. We write knowing that well-crafted research may also, more or less easily, reach biomedical trainees, policy makers, other media, and ultimately citizens across the globe. Accordingly, a new measure of the value of a publication (and of an author’s work) must incorporate a broader metric of reach than how often the work is cited. Such a measure must also include how many hits the publication generates among Web browsers, whether it generates discussion in blogs or reaches into other media, and whether it influences policy making.

I propose a new concept that I will call the influence factor. An influence factor might be defined as the sum of scholarly citations plus some derivative of how many times the work is actually viewed (eg, hits on the online publishing site) plus a reflection of how widely the article moves to other media (eg, mass media, social media), and even ultimately how much influence it has on philosophical, conceptual, or policy matters.

For those fond of mathematical precision, influence factor could be defined thus:

INF = a + b + c + d/1000 + P

where INF = influence factor, a = number of scholarly citations, b = number of print media citations, c = number of nonprint citations (television, radio, podcasts), d = number of hits recorded in Google metrics for electronic-only or Web-reproduced publications, and P = number of substantive policy changes definitively linked to the work.
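
To make the arithmetic concrete, here is a minimal sketch in Python. The input counts are hypothetical, and the optional weights are an illustrative extension reflecting the earlier suggestion that institutions might tune the metric to local priorities; they are not part of the definition above.

    def influence_factor(a, b, c, d, p, weights=None):
        """Compute INF = a + b + c + d/1000 + P as defined above.

        a: scholarly citations; b: print media citations;
        c: nonprint citations (television, radio, podcasts);
        d: Web hits; p: policy changes linked to the work.
        weights, if given, rescales each of the 5 terms in order,
        reflecting possible tuning to local institutional priorities.
        """
        terms = [a, b, c, d / 1000, p]
        if weights is not None:
            terms = [w * t for w, t in zip(weights, terms)]
        return sum(terms)

    # Hypothetical example: 40 scholarly citations, 3 newspaper mentions,
    # 1 podcast, 25,000 Web hits, and 1 linked policy change.
    print(influence_factor(40, 3, 1, 25_000, 1))  # 70.0

In this unweighted form, the example article’s 25,000 hits contribute 25 points, a figure of the same order as its 40 scholarly citations, which illustrates the scaling choice embedded in the d/1000 term.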

HOW TO GET FROM HERE TO THERE

Replacement of the journal impact factor will not happen easily, automatically, or overnight. It is entrenched and traditional and has withstood prior calls for replacement. When deficiencies become glaring and persistent, however, the time to reach for alternatives has arrived, as I believe it has. The development by stakeholders of a vision of a better system would be a start. This might occur in government agencies (the National Institutes of Health, the Department of Health and Human Services, and the Centers for Medicare and Medicaid Services come to mind), in the private sector, or in academia. There are surely forums in the health information technology sector where high-level policy issues can be addressed. The vision should incorporate a new and substantially better approach to credibly, fairly, and consistently evaluating the value of scholarly work. As described, an influence factor can be broader in scope, deeper in reach, fairer, and more credible than existing and historic methods. Once a vision is established and the opportunity for development opens, calls for submissions, such as requests for proposals, would bring into play productive and competitive forces in the health information technology sector and the broader academic community. The industry has failed to produce a definitive replacement to date; in fact, the proliferation of competing services may make replacement of the journal impact factor less likely rather than more likely.

The slide rule is an antique, the mimeograph machine is obsolete, and the rotary dial phone is fading into the mists of time. It is time for the journal impact factor—a product of time past—to fade from the stage and make way for new, more robust means of assessing scholarly works. Improved indexing methods may serve to emphasize genuine scholarly scientific merit.

Disclosure Statement

The author has no conflicts of interest to disclose.

Acknowledgments

Kathleen Louden, ELS, of Louden Health Communications performed a primary copy edit.

How to Cite this Article

Hogan R. Assessing impact of biomedical scholarship in the information age: Observations on the evolution of biomedical publishing and a proposal for a new metric. Perm J 2019;23:18.037. DOI: https://doi.org/10.7812/TPP/18.037

Author Affiliations

1 School of Medicine, University of California, San Diego

2 Emeritus, Southern California Permanente Medical Group, San Diego

Corresponding Author

Robert Hogan, MD (rwhogan@cox.net)

References
1. Garfield E. Citation indexes for science: A new dimension in documentation through association of ideas. Science 1955 Jul 15;122(3159):108-11.
2. Garfield E. The history and meaning of the journal impact factor. JAMA 2006 Jan 4;295(1):90-3. DOI: https://doi.org/10.1001/jama.295.1.90.
3. Garfield E. Journal impact factor: A brief review. CMAJ 1999 Oct 19;161(8):979-80.
4. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997 Feb 15;314(7079):497. DOI: https://doi.org/10.1136/bmj.314.7079.497.
5. Neuberger J, Counsell C. Impact factors: Uses and abuses. Eur J Gastroenterol Hepatol 2002 Mar;14(3):209-11.
6. Satyanarayana K, Sharma A. Impact factor: Time to move on. Indian J Med Res 2008 Jan;127(1):4-6.
7. Chew M, Villanueva EV, Van Der Weyden MB. Life and times of the impact factor: Retrospective analysis of trends for seven medical journals (1994-2005) and their editors’ views. J R Soc Med 2007 Mar;100(3):142-50.
8. Andersen J, Belmont J, Cho CT. Journal impact factor in the era of expanding literature. J Microbiol Immunol Infect 2006 Dec;39(6):436-43.
9. Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE. Comparison of SCImago journal rank indicator with journal impact factor. FASEB J 2008 Aug;22(8):2623-8. DOI: https://doi.org/10.1096/fj.08-107938.
10. Archambault É, Larivière V. History of the journal impact factor: Contingencies and consequences. Scientometrics 2009 Jun;79(3):635-49. DOI: https://doi.org/10.1007/s11192-007-2036-x.
11. Greenwood DC. Reliability of journal impact factor rankings. BMC Med Res Methodol 2007;7:48.
12. Howard J. Humanities journals confront identity crisis. Chronicle Higher Educ 2009;55(19):A1.
13. Bohannon J. Hate journal impact factors? New study gives you one more reason [Internet]. Science 2016 Jul 6 [cited 2019 Mar 7]. Available from: www.sciencemag.org/news/2016/07/hate-journal-impact-factors-new-study-gives-you-one-more-reason.
14. Beall J. Best practices for scholarly authors in the age of predatory journals. Ann R Coll Surg Engl 2016 Feb;98(2):77-9. DOI: https://doi.org/10.1308/rcsann.2016.0056.
15. Klyce W, Feller E. Junk science for sale: Sham journals proliferating online. R I Med J (2013) 2017 Jul 5;100(7):27-9.

Keywords: electronic publishing, Index Medicus, journalology, journal impact factor, journal indexing, MEDLINE, PubMed, scientometrics
