
Impact Factor vs Impact Statement

09 July 2014
By Kif Liakath-Ali and Christine Weber

In recent months we have heard a lot about impact factors and scandal-ridden publications; the two are, in a way, interrelated. It is clearly high time to rethink the impact-factor-oriented publication culture in science. Very recently we had a fantastic opportunity, here in our lab at King's College London, to discuss this topic over lunch with Professor Randy Schekman (University of California, Berkeley). Randy is famous both for his 2013 Nobel Prize-winning work on vesicle trafficking and for his outspoken campaign against the damaging influence of impact factors.

While we discussed many topics that unavoidably revolve around impact factors, most of us wanted to know what alternatives there actually are. How can academic merit be judged without some form of metric? The approach Randy suggests is an impact statement. An aspiring young researcher would submit a job application with, say, a 250-word statement on what he or she had achieved so far. Based on this and other associated criteria, such as academic references, recruiters could assess the scholarship while trying to ignore where the work was published. But who will choose to do this? As Randy himself admits, recruiters prefer simplicity in the selection procedure: assessing candidates by a simple number or rank, which the impact factor of the journals they publish in conveniently provides. Yet, he argues, the rank of a journal should not be used to judge someone's level of scholarship.

wattlab members with Professor Randy Schekman (University of California, Berkeley)

After all, a journal's impact factor is a calculated measure that reflects the overall influence on the scientific community of all the papers published in a given journal, expressed as the number of citations they receive. This obviously says little about the quality of an individual publication or researcher. Some papers may even propose ideas that have not been accepted or verified by the scientific community and therefore continue to polarize – which, ironically, can also drive up citations. Aiming to publish in a small number of so-called luxury journals has created immense competition in the field and is slowly changing the way scientific research is conducted. It may lead us to focus on projects that these journals are more likely to accept, instead of turning our attention to the scientific questions we actually want to answer.

During our lunch with Randy we also discussed the role of impact factors outside the biological sciences, for example in the physical sciences, where researchers hardly recognise or pay attention to them. Is it because these fields are less competitive than the biosciences, or because publication there doesn't involve rigorous peer review and multiple rounds of experiments to support the data? We can surely rule out the latter, but it is indeed interesting that some scientific fields seem to have their own ways of assessing quality. Perhaps there is still room to take different perspectives into account (we are happy to hear from any physicists or other non-biological scientists who happen to read this blog!). And perhaps this is also a process that matures and evolves over time.

Publishing is one of the oldest traditions in scientific research. Originally it was seen as little more than a by-product of the research itself. Early ground-breaking discoveries did not have to appear in high-impact journals to be acknowledged and accepted: scientists of those days obviously knew nothing of impact factors and instead relied on the presentation of solid evidence, judging the quality of a study as it was presented. Granted, there were far fewer scientists then than there are today. Interest in research was limited, as the general public had hardly any access to education and the scientific profession was restricted to a privileged elite with the means and leisure to indulge in it. Socrates, the ancient philosopher and all-rounder, didn't even have to write anything down to become famous for his ideas. Aristotle, in turn, assembled a whole library and wrote many of the books himself, yet never needed a luxury journal to attract attention. There is a notion that truly valuable and important ideas will find their way into society if they are needed. We do not believe it should matter where they were originally published, as long as they get out there.

Today, inevitably, impact factors have become part of our research culture as a means of navigating the wealth of scientific data being produced. But besides bringing some order to the chaos and highlighting valuable work, they have started to distort the way we interpret and assess scientific results and, more importantly, how we rate the scientists behind them. It makes perfect sense for a journal to develop a measure of its own reach and influence within the community – a sort of 'business evaluation'. Perhaps that is what scientists should do as well: rely less on the journal's evaluation and instead develop ways to measure their own reach and influence in the field.

The idea of an impact statement replacing the impact factor in recruitment, as suggested by Randy, seems practical and promising. But we, as the current generation of researchers, need more evidence that this model can work in practice. It is, after all, our careers that depend on gaining a voice and finding a niche in today's vast scientific community. Any improvement that ensures good-quality work is adequately recognised will most certainly be embraced by every scientist who is in this profession "in search of the truth" (rather than in search of the easy spotlight, which may also apply to some). But we don't expect major changes to happen quickly.
