
End of the road for h-index rankings

US chemists who have been ranking living chemists by their h-indices have decided to stop compiling the rankings. The decision comes after criticism that the list places too much emphasis on a single metric of academic performance and risks tempting assessors into simplistic decisions.


For the last five years, Henry ‘Fritz’ Schaefer and Amy Peterson at the University of Georgia in Athens, US, have compiled a ranked list of living chemists with h-indices of over 55. That list is published online by Chemistry World.

The h-index was first described by physicist Jorge Hirsch in 2005 in an effort to overcome deficiencies in other bibliometric indices. A scientist's h-index is the greatest number h of papers they have published that have each amassed at least h citations. For example, George Whitesides of Harvard University in Cambridge, US, who is currently the highest-ranked living chemist with an h-index of 169, has published 169 papers that have each received at least 169 citations.
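
As a minimal illustration of the definition, the sketch below computes an h-index from a list of per-paper citation counts. The function name and the toy citation data are illustrative assumptions, not drawn from any citation database:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h of the
    given papers have at least h citations each."""
    # Rank papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    # The h-index is the last rank at which a paper's citation
    # count still meets or exceeds the rank itself.
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A toy record of five papers with these citation counts:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Note that, by this definition, extra citations piled onto a single paper do not raise the index; an h-index of 169 requires 169 separate papers each cited at least 169 times.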


However, Schaefer has decided that the latest update will be the last. Since the project began, Schaefer says, it has attracted fierce and ongoing criticism. ‘Although I thought it would pass, the criticism never seems to end.’ The critics’ ire is aimed partly at the metric itself, but mostly at Schaefer’s list. The act of compiling a ranking, they say, puts too much emphasis on the h-index as a single metric.

If not h, then what?

While Schaefer recognises that this concentration of attention on a single number is a problem, he remains convinced that the h-index has a role to play as part of a broader approach to research assessment. Whitesides, who has been involved in many academic assessment efforts, agrees. The question of how we should assess science is a controversial one, he says, and one that people have historically not been good at answering. The concept of value is hard to define when related to science, he adds, and the answer you get depends largely on the question you ask. ‘The h-index has known biases (especially toward age); academy memberships and academic prizes are political and also age biased; citation indices value quantity; peer review is resistant to new or controversial ideas; money generated in grants has to do with the field in which you work; technology transfer is important in creation of jobs, but does not necessarily require new ideas; and “originality” is a beauty that is in the eye of the beholder,’ Whitesides says.

‘One virtue of the h-index is that it is dispassionate,’ he adds. Schaefer agrees, adding that its objectivity ‘levels the playing field’. But Richard Catlow from University College London, UK, who is chair of the chemistry panel for the forthcoming Research Excellence Framework (REF) assessment of UK academic outputs, emphasises that the h-index is only one piece of ‘crude, but useful, information’. He adds that it cannot be ignored, but needs to be combined with other measures, feeding into expert judgement. ‘I would hate it if we moved to a system where appointments or promotions were driven by metrics. That, to me, would mean that we had lost confidence in our own expert judgement.’

Catlow also emphasises that h-indices and similar metrics like journal impact factors will not be part of the REF assessment – partly because the REF focuses on research outputs rather than individuals. ‘Citation data on the outputs submitted will be available to panel members, and that may be used – if the panel members wish – to assist them in their assessment, but this is in no way a substitute for their expert peer review.’

So what is the value of the h-index as a metric? Schaefer says that the best way to view it is in broad brush strokes, taking age into account. ‘For an established researcher, there’s no doubt that having an h-index of over 100 is excellent,’ he says, ‘but likewise, at 30 years old, an h-index of 30 is terrific.’ However, he agrees that trying to distinguish between two researchers separated by only a couple of points is largely meaningless.

He also notes that it is perfectly possible to have a huge impact in a field without amassing a large h-index. ‘Albert Einstein has an h-index of about six, for example, and there are several Nobel laureates with h-indices of less than 55, who therefore don’t feature on the rankings.’

[Image: Ranking the world's top chemists by h-index. The difference between rankings may only be a couple of h-index points.]

Much pain for little gain


Aside from the criticism, Schaefer wants to stop for another reason: compiling the indices takes significant effort to ensure the values are accurate. ‘The process cannot currently be automated,’ Schaefer says. This is largely because of deficiencies in citation databases, where papers by different scientists with the same name are often conflated, distorting the values. ‘The greatest example of this was when ISI listed the most highly cited chemists, and “K Tanaka” came out on top.’ This, Schaefer contends, did not represent any single researcher, but rather the accumulated output of many chemists sharing a common name. Conversely, the h-indices of researchers who have published under different names, or moved between diverse fields of research, could be underestimated.

In compiling their rankings, Schaefer and Peterson have manually combed the databases to match papers to authors and ensure the accuracy of their dataset. They have also contacted researchers to clarify changes of name, address or research field. Even then, Schaefer admits, in some cases it is impossible to be 100% certain. Ongoing projects, including Orcid (Open Researcher and Contributor ID) and Thomson Reuters’ ResearcherID, aim to address these issues. However, these programmes are voluntary and would require researchers to register and then associate all their previous papers with their identifier.

For Schaefer, the end no longer justifies the effort and disparagement. ‘It’s a lot of work for a project plagued by critics,’ he says. ‘However, if I were to get a load of letters saying “please don’t stop”, I might reconsider.’
