Letter · Open access · Peer reviewed

Measuring academic productivity: don’t drop your ‘h’s!*

2011; Wiley; Volume: 66; Issue: 10; Language: English

10.1111/j.1365-2044.2011.06882.x

ISSN

1365-2044

Author

Jaideep J. Pandit


Abstract

In this issue of Anaesthesia, the work of Pagel and Hudetz on ‘scholarly productivity’ of US anaesthesiologists, using the ‘h-score’ [1], coincides with an article using the same measure for UK anaesthetists [2].

Measuring clinical productivity is relatively easy: anaesthetists are locatable to the operating theatre (or pain clinic, labour ward or intensive care unit). Notwithstanding quality outputs such as pain scores, satisfaction, etc., this clear time-based metric underpins UK consultant contracts (‘programmed activities’) and facilitates workload calculations [3]. Time-based analyses can assess staffing shortfalls in anaesthetic departments [4], and can measure efficiency [5] and productivity across specialities [6, 7].

Time-based metrics are irrelevant in academia. The outcome of any scientific investigation is uncertain, and quality, not volume, is what matters. Ludwig Wittgenstein did not over-exert himself, writing only one 28-page book – but what a book [8]! His (posthumous) citations strain electronic search engines, with citation rates > 500 per year (see http://www.harzing.com/pop.htm).

Academic activity is diverse, and includes raising grant income, teaching, refereeing, editing, lecturing, administration, etc. (Scheid et al.’s list covers three pages [9]); the output of each is ideally measured separately. Research income is often favoured as a primary metric, since only ‘good’ research can attract funding. But, as the Royal Society of Edinburgh itself pointed out (http://www.rse.org.uk/govt_responses/2006/rae.pdf), it is easier to be funded once already well funded (resulting in a self-propagating rather than a selective system), and there is a further danger that economic metrics reward profligate, not excellent, research. There is also potential for bias against fields like mathematics, which sometimes need little funding other than a pencil and paper.

Publishing is generally agreed to be important, certainly by the Research Assessment Exercise (RAE) and its successor, the Research Excellence Framework (REF; see http://www.hefce.ac.uk/research/ref/), whose assessments are used by the Higher Education Funding Councils to determine block grant allocations to universities. This is where the h-scores used by Pagel and Hudetz come in. Just as it is important to understand schemes such as Payment-by-Results to appreciate how (in large part) NHS hospitals are funded [10], it is necessary to understand something of publication metrics to understand how universities (which work increasingly in partnership with the NHS) are funded.

The simple number of publications gives no indication of quality or impact. The citation score, however, properly reflects the fact that articles central to an important and active field of enquiry will be cited more than irrelevant pieces in obscure fields. However, the citation score does not distinguish between authors who have published a single, highly cited work (e.g. Wittgenstein) and those who publish many, less-cited articles. The h-score is obtained by ranking an author’s articles from highest- to lowest-cited; an h-score of h signifies that h articles have each been cited at least h times (Table 1). The h-score cannot exceed the total number of articles published (so Wittgenstein’s h-score will always be 1), and the lowest possible h-score is zero (i.e. the author is never cited). One problem, however, is that two authors can have the same h-score while one has much greater citation counts for the articles above the h-threshold (Table 1).
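For concreteness, the ranking procedure just described can be written down in a few lines. This is a minimal sketch in Python; the citation counts are invented purely for illustration:

def h_index(citations):
    """h-score: the largest h such that the author has h articles,
    each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # highest- to lowest-cited
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this article still clears the threshold
        else:
            break      # the ranked list is non-increasing, so we can stop
    return h

print(h_index([5000]))           # a Wittgenstein-like record: one enormously cited work -> 1
print(h_index([9, 7, 6, 2, 1]))  # several moderately cited articles -> 3
print(h_index([0, 0]))           # a never-cited author -> 0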
This prompted the e-index, which takes into account the citations above the h-threshold (and which, in Table 1, would favour Jones over Smith [11]). Another limitation is that the h-score takes no account of multiple authorship (an hm-index can adjust for this by dividing the h-score by the number of authors). If Jones always published with 20 co-authors (Table 1) while Smith worked alone, then our views of their relative productivities may change again. Moppett and Hardman also reported a ‘g-index’ (there also exists a g1-index for comparing groups of researchers) [2]; a computational sketch of these variant indices appears at the end of this passage. Suffice it to say here that this ‘alphabet soup’ of metrics reflects the fact that measuring academic productivity (a) is not straightforward, and (b) is taken very seriously indeed by analysts, with the science of ‘bibliometrics’ emerging as a distinct field in its own right [12]. I will not discuss the relative merits of the various measures, but instead focus on what is relevant for anaesthesia.

Moppett and Hardman [2] addressed the claim of the Royal College of Anaesthetists’ Academic Strategy (Pandit) Report that “anaesthetic departments have performed poorly [and] their output is published generally in low impact factor, specialist journals” [13, 14]. They also wished to investigate whether, as previously predicted, UK publishing in anaesthesia might disappear by ∼2017 [15, 16]. Pagel and Hudetz wished to provide a metric by which to assess the impact of academic strategies [1]. In contrast to the Pandit Report’s pessimism, the publication metrics of UK anaesthetists in fact appear within the range of their US counterparts (and of several European and Canadian anaesthetic research units, and of fields like radiology, urology and neurosurgery) [1, 2].

Reassuring perhaps, but the data need closer inspection. There appear to be just 23 ‘academic units’ in the UK, some of which are NHS centres not affiliated to any of the 31 medical schools (see http://www.medschools.ac.uk/Students/UKMedicalSchools/Pages/UK-Medical-Schools-A-Z.aspx), so over a third of UK medical schools have no academic anaesthetic department. Of these 23 units, a third are virtually ‘one-man bands’. There are just 104 research-active UK anaesthetists, concentrated mainly in 13 centres [2]. There appear to be 40 UK ‘professors’ [2], but several are basic scientists. The age distribution of senior academics would also have been relevant: since submission of these articles I know of one professorial retirement, while my own department (Oxford) no longer has any clinical professors. There are therefore probably only about two dozen active UK clinical professors of anaesthesia, and this number is likely to shrink rapidly.

By contrast, there are 132 academic departments in the US, with an estimated total staffing of ∼8000 academic anaesthesiologists (excluding basic scientists) [1]. Academics thus form ∼23% of US staff (there are ∼35 000 US anaesthesiologists; see http://www.ifna-int.org/ifna/page?25) but < 2% of UK staff (there are ∼6500 UK consultants; see http://www.rcoa.ac.uk/docs/Censusreport-final.pdf). The data of Pagel and Hudetz therefore represent just a small sample of the ∼3000 associate and full professors in the US.

Publication indices rely not only on articles’ being published and read, but also on their being cited. This requires a critical mass of authors, who cite each other’s work. In a circular fashion, this raises the citation metrics of the relevant journals, and of the authors in the field.
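Continuing the sketch above, the variant indices mentioned earlier (the e-index, the g-index and the simple multiple-authorship adjustment) can be computed along the following lines. The Smith and Jones citation lists are hypothetical stand-ins for Table 1, which is not reproduced here, and the e-index is given in one common formulation (Zhang’s): the square root of the citations that the h-core articles hold in excess of the h × h already captured by the h-score.

import math

def h_index(citations):
    # As in the sketch above: largest h with h articles cited at least h times.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def e_index(citations):
    # Excess citations of the h-core articles, beyond the h*h the h-score records.
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return math.sqrt(sum(ranked[:h]) - h * h)

def g_index(citations):
    # Largest g such that the g highest-cited articles have >= g*g citations in total.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

smith = [4, 4, 3, 1]     # hypothetical: modest citations throughout
jones = [90, 60, 30, 1]  # hypothetical: heavy citations above the h-threshold
print(h_index(smith), h_index(jones))  # 3 3   (indistinguishable by h-score)
print(e_index(smith), e_index(jones))  # ~1.4 vs ~13.1 (e-index favours Jones)
print(g_index(smith), g_index(jones))  # 3 vs 4
print(h_index(jones) / 20)             # 0.15: hm-style adjustment if Jones always had 20 co-authors

On these invented figures the e-index separates two records that the h-score treats as identical, which is precisely the Smith/Jones problem described above.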
If only a few researchers are writing articles in little-read journals, there is likely to be a natural upper limit to the overall publication metrics that are physically possible. Figure 1 shows how even publishing very actively over a sustained period can yield very low, speciality-specific publication metrics [17]. The activity of anaesthesia researchers is not the limiting factor; rather, it is the dearth of other research-active anaesthetists that limits citation scores.

Figure 1 (a) Cumulative publications of three (randomly selected) UK senior anaesthetists (A–C, solid lines) and three similarly senior clinical researchers from medical specialties (1–3, dashed lines). (b) Mean citations per year of the same researchers. Note that citations can be much higher despite the same publishing rate (i.e. the period before ∼2002 for all researchers), or even despite lower publishing rates (see researcher 1).

Both sets of authors [1, 2] analysed their data by academic rank, to assess the metrics required for appointment to the rank of ‘professor’. Interestingly, the h-scores overlap greatly, and it is frankly remarkable that any professor can have an h-score as low as zero (in the US [1]) or seven (in the UK [2]). Collectively, the data suggest that an h-index of ∼10–15 enters the professorial interquartile range. Unfortunately, neither set of authors investigated (as would have been possible) the publication metrics at the time of appointment to a professorial title, which would presumably be much lower than ∼10–15.

Both articles imply that such international, anaesthesia-specific h-scores might inform the award of professorial titles, but in the UK the notion of ‘professor’ has changed greatly. Traditionally, the professorial title in UK universities was reserved for the (single) head of an academic department; all other faculty were just ‘(senior) lecturers’ or occasionally ‘readers’. Departments were recognisable by subject (e.g. Chemistry, Physics, etc.), and Anaesthesia was well represented before ∼1990. From the 1980s, universities merged departments strategically to improve their RAE scores (and thus maximise government grant support). Thus, a Department of History might be merged with Psychology to create, say, an altogether more impressive ‘Human Sciences Division’. Mergers also saved costs, as fewer professors (and managers) were needed. This process identified the ‘weak departments’, and on these new metrics anaesthetic departments either disappeared, were merged (e.g. into a joint ‘Department of Surgery, Therapeutics and Anaesthesia’), or were retained as smaller units within larger divisional structures (e.g. a ‘Division of Surgical Science’). Professorial anaesthetic posts were invariably lost [13]. Some universities strategically appointed basic scientists to clinical chairs as a means of bolstering a department’s metrics: full-time scientists devoting 100% of their time to research understandably yield higher publication metrics than clinicians devoting only ∼50% of their time [17]. It is interesting, then, that Pagel and Hudetz report that in the US only clinicians can chair clinical academic departments [1].

Today, disciplines in the UK cannot be so readily identified by the name of the host department. Rather, collaborative and multidisciplinary work means that many researchers nominally attached to, say, a Department of Biology might in fact be engineers working in biotechnology.
Cross-disciplinary grants mean that many Principal Investigators (PIs) manage larger budgets than those of traditional ‘departments’. Universities have increased the number of ‘titular professorships’ to reward these PIs (who are now by far the main recipients of university professorial titles). Therefore, UK anaesthetists may not be able to rely upon the results of Pagel and Hudetz, and of Moppett and Hardman, to assist in academic promotion.

The first lesson is for anaesthetic organisations, and concerns the issue of ‘professorships’ implicit in the articles. The things at which anaesthetists excel (e.g. delivery of a high-quality clinical service, clinical research, audit, teaching and training) are no longer valued by most universities (whose main interests are now explicitly publication metrics and grant income). Rarely being PIs of standing in the eyes of others, and with lower speciality-specific h-scores, anaesthetists will infrequently be awarded UK university professorial titles. If we collectively view this lack of titular recognition as a problem (in that it adversely affects the image of the speciality in the eyes of the wider public or of politicians), then there exists a simple solution: we could confer the honorific awards ourselves (through our royal colleges, faculties, associations or specialist societies), recognising achievements that we, rather than others, value. The data offered in these articles provide international, speciality-specific criteria that could contribute to assessments for any such awards. In addition, this approach would help ensure titular recognition of anaesthetists based in non-university departments, whom Moppett and Hardman identified as making an important contribution to UK academic activity [2].

The second lesson is for all active UK academics. If there are just 104 research-active anaesthetists (and the h-scores indicate that only ∼70 of these are truly active), then an essential means of survival is greater collaboration – to include not just research projects [18–20], but also targeted and strategic senior career development [21]. The alternative to collaboration is competition. Large populations withstand vigorous competition, but it is well established that as a population’s size declines, the risk of total extinction increases [22].

The final lesson is for all readers of this journal. Reading adds to continuing professional development and to ‘education’ in the widest sense. Unfortunately, reading alone does little to help us as a journal, or to help us survive as an academic speciality, as it does not contribute to publication metrics. What really matters is that readers – both clinical and academic – take up the pen, put it to paper and write to our journals. Whether it is correspondence or a follow-up study inspired by the main article, it is writing and citing work by anaesthetists that will raise our collective profile (Fig. 1). Anaesthesia needs much more than an active readership; it also needs a large and active authorship.

I am an Editor of Anaesthesia; I sit on the Research Council of the National Institute of Academic Anaesthesia; and I am Scientific Officer of the Difficult Airway Society. No financial support or other competing interests declared.
