Next Forum

In the fortieth number of Antropologicheskij forum / Forum for Anthropology and Culture, published by the Museum of Anthropology and Ethnography (Kunstkamera) of the Russian Academy of Sciences, the European University, St Petersburg, and the European Humanities Research Centre, University of Oxford, our ‘Forum’ (written round-table) will address the question of bibliometrics and its relation to the assessment of scholarly quality. We would like to invite you to respond to the questionnaire below.

You may, as you wish, directly address the questions presented here, or send in a text responding to one or some of them (or taking up some other issue that seems to you relevant). Either way, we would be grateful if you could keep your answers to a maximum of 10 pp. (1.5 spaced, 12-point type). Please use the author-date in-text citation system for any references, in the format [Smith 2002: 12], i.e. author/date (no comma) in square brackets, appending a list of ‘References’ at the end with full publication details. For articles: Author, e.g. Smith M. A.; article title, e.g. ‘Visual Anthropology’; journal title and details, e.g. Ethnology. 2002. No. 3. Pp. 14–19. For books: Author, e.g. Smith M. A.; book title, e.g. Visual Anthropology; place, publisher, date, and pages, e.g. London: Anvil Press, 2002. 356 pp. Please send replies by 31 December 2018 to forum.for.anthropology(), with a copy to catriona.kelly(); your email address should be included in any attached file. We hope that the discussion will appear in the spring of 2019.


Forum 40: Applied Bibliometrics

‘If the stars shine, someone must need them, eh?’ as the poet Mayakovsky put it in ‘Listen to This!’ But does the boom in bibliometrics over the past 10–15 years actually mean it is essential? And if so, then for whom and how?

The Oxford English Dictionary tells us that bibliometrics is ‘the branch of library science concerned with the application of mathematical and statistical analysis to bibliography; the statistical analysis of books, articles, or other publications’, while ‘scientometrics’ is ‘the branch of information science concerned with the application of bibliometrics to the study of the spread of scientific ideas’. Most of us, however, are familiar with bibliometrics not as an academic discipline in its own right, but from the annoying requirement constantly to enter publications into university or grant agency databases (such as ResearchFish in the UK), from discussions about open access and pressure to demonstrate ‘impact’, exhortations to ‘act on publication’, and so on. Many of us are also familiar with sites such as Academia.edu and ResearchGate, which, as well as acting as portals for academic publications, also offer download statistics and (in the case of Academia.edu) even rank researchers (‘top 3 per cent’, etc.). The old adage, ‘publish or perish’, has been joined by another, ‘publish in the right way or perish’. ‘In the right way’ can refer in some contexts to research monographs or refereed articles, but for academic administrators it can also refer to journals that are presented by services such as Web of Science and Scopus as having a high impact factor.

We would not wish to slight bibliometrics in the original sense of a branch of scholarly librarianship, but what we term here ‘applied bibliometrics’ (AB) is a very different animal. What explains its extraordinary proliferation? Which problems does it address? Who needs it and why?

Inviting as it might seem to speculate on the likely financial benefits that may be extracted from journals, institutions, and individual academics by bibliometric services such as WoS (an arm of Thomson Reuters) and Scopus (a subsidiary of Elsevier) and the publishing conglomerates that operate them, crude conspiracy theories are vitiated by the appeal of these indexes to university administrators, funding bodies, and government departments.

Obviously, all publishers are looking at new ways of generating income streams given the threat to traditional revenue represented by online publication, but the lively market for statistical measurement is not just driven by commercial considerations. There are, evidently, other factors at stake. Among these is the issue of academic visibility. In the past, publication was really not all that important, its ‘impact’ still less so. Indeed, the attitude tended to be that scholarship should precisely not be accessible to a wide audience. It was perfectly possible for academics to devote themselves entirely to teaching, or on the other hand to topics such as ‘The Economic Influence of the Developments in Shipbuilding Techniques, 1450 to 1485’, as so memorably ironised in Kingsley Amis’s 1954 novel, Lucky Jim.

Seen from this perspective, the central aim of AB is incentivising the production of internationally significant, widely read research publications. Indexes act (supposedly) as a form of quality control. They make it harder to appoint a person just because he or she happens to be the most brilliant ex-student of someone on the search committee; that is, they act as a counterweight to patronage by offering ‘objective’ evidence of performance. By extension, they enhance the prestige of particular research groups and institutions (the proliferation of ‘best in world’ rankings for universities goes in step with academic rankings as such).

On the other hand, the question of whether citation indexes actually achieve the proclaimed goal of measuring and thus enhancing quality is a matter of intensive debate, particularly in the humanities. How well do they actually work? In what contexts are they used, and what are the pluses and pitfalls? Are they fit for purpose, whether as an instrument in their own right, or as part of a broader system of assessment (and what should the other elements in that be)? These are the considerations that led us to organise this discussion, and to evolve the questions below.

1. Please describe your own experience of applied bibliometrics. In your view, what are the positive features of the system? How reasonable and realistic are its aims, and how effectively does it achieve them? Does it have any negative or harmful effects, and if so, which?

2. What are the relations between AB and the criteria of merit used by journals and publishing houses, selection criteria in universities and scientific institutes, learned societies, and so on? What impact does AB have in the world of grant funding (e.g. proliferation of particular research fields, prominence of certain research centres, etc.)?

3. Do you have any views on how the application of AB may vary in different countries? If so, are there any particular academic cultures where the system seems to function particularly well (or badly)?

4. Would you say that AB is, all in all, actually a useful and necessary instrument of quality assurance in the academic world? What are the best ways of making sure that it does not become a problem in its own right as well as (or instead of) a way of addressing perceived defects?

Thank you very much!


P. S. We consider it only fair to warn you that contributions to the ‘Forum’ are not treated by Scopus and WoS as academic articles and are therefore not listed by them.

Editorial Office of the journal ‘Forum for Anthropology and Culture’:

Peter the Great Museum of Anthropology and Ethnography (Kunstkamera), Russian Academy of Sciences

3 Universitetskaya Emb., 199034, St Petersburg, Russia

European University at St Petersburg

6/1А Gagarinskaya Str., 191187, St Petersburg, Russia

Phone: +7 (812) 386-76-36

Fax: +7 (812) 386-76-39

E-mail: forum.for.anthropology()

Forum for Anthropology and Culture:

ISSN 1815-8927

Antropologicheskij forum:

ISSN 1815-8870

Mass Media Registration certificate PI No. FS77-35818, issued by the Federal Service for Supervision of Communications, Information Technology and Mass Media