FORUM FOR ANTHROPOLOGY AND CULTURE

ANTROPOLOGICHESKIJ FORUM

Next Forum

In the sixtieth issue of Antropologicheskij forum / Forum for Anthropology and Culture, published by the Museum of Anthropology and Ethnography of the Russian Academy of Sciences (Kunstkamera) and the European University at St Petersburg, our ‘Forum’ (written round-table) will address the subject of AI in the social sciences and humanities. We would like to invite you to respond to the questionnaire below.

You may, as you wish, address the questions presented here directly, or send in a text responding to one or some of them (or taking up some other issue that seems to you relevant). Either way, we would be grateful if you could keep your answers to a maximum of 10 pp. (1.5-spaced, 12-point type). Please use the author-date in-text citation system for any references, in the format [Smith 2002: 12], i.e. author and date (no comma) in square brackets, and append a list of ‘References’ at the end with full publication details. For an article: Smith M. A., ‘Visual Anthropology’, Ethnology, 2002, no. 3, pp. 14–19; for a book: Smith M. A., Visual Anthropology. London: Anvil Press, 2002, 356 pp. Please send replies by 15 January 2024 to forum.for.anthropology@gmail.com, with a copy to ck616@cam.ac.uk; your email address should be included in any attached file. We hope that the discussion will appear in March 2024.

 

Forum 60: AI in the Social Sciences and Humanities

AI and neural networks have become fixtures in our daily lives. We use them for purchases and business transactions, for medical advice, when speaking to call centres, when using biometric authentication or translating texts from another language, or simply for entertainment.

Neural networks are everywhere, including in places where their use is undesirable, or indeed unjustifiable or borderline unlawful. And if in the world of the creative arts the rising presence of AI has generated rows and industrial action, in the social sciences and humanities things have, so far, been quieter. Neural networks nonetheless play a background role in the digital humanities, in the analysis of large textual corpora by specialists in literature and the history of art, and in discourse analysis. AI allows work with data to extend well beyond simple keyword searches or formulaic calculations: a recent study enumerates, for example, quali-quantitative analysis, historical modelling, and the identification of repeats and echoes in works of art [Gefen, Saint-Raymond, Venturini 2021].

It is this use of neural networks in the social sciences and humanities that we wish to focus on here. Perhaps you have yourself processed data using AI, or had to check whether an essay submitted by a student was composed with the help of AI; possibly you have used AI-generated pictures or diagrams in a presentation. Even if you have done none of these things, you are unlikely to escape the impact of AI in the very near future, and it is time to ponder the implications. We hope that the questions below will serve as a stimulus.

1. What are the applications of neural networks in academic life that you have heard or read about? If you personally have had contact with such applications in your professional life (whether as an object of study or to practical ends), in what ways? What would be your predictions about likely further uses in the near future?

2. How useful do you consider AI in academic research and analysis? Have you yourself employed it for, say, putting together bibliographies? What is the likely impact of resources such as ChatGPT that can produce superficially plausible texts, but work entirely on the principle of compilation, without necessarily referring to facts as such?

3. What, in your view, are the pluses and minuses of AI in the world of education? Have you encountered cases where you knew, or suspected, that a student had submitted work that was actually written by a resource such as ChatGPT? In what ways is it possible to identify such work? What are the methods by which we might effectively combat this new pedagogical challenge in a general sense, not just on a case-by-case basis?

4. What are your views on the ethical implications of AI use in academic work and teaching? Does having recourse to neural networks create additional problems in terms of identifying authorship?[1] What are the implications for intellectual property rights where the products of neural networks are concerned? Should there be limits on the use of AI, and if so, of what order?

Many thanks!

 

References

Gefen A., Saint-Raymond L., Venturini T., ‘AI for Digital Humanities and Computational Social Sciences’, Braunschweig B., Ghallab M. (eds.), Reflections on Artificial Intelligence for Humanity. Cham: Springer, 2021, pp. 191–202. (Lecture Notes in Computer Science, vol. 12600). doi: 10.1007/978-3-030-69128-8_12.



[1] Anthropologists, of course, have long grappled with the issue of authorship/intellectual property rights, particularly in the case of data acquired in ‘the field’, and especially since the entire relationship between researchers and informants began coming under scrutiny.

 

 

Editorial Office of the journal 'Forum for Anthropology and Culture':

European University at St Petersburg

6/1А Gagarinskaya Str., 191187, St Petersburg, Russia

Phone: +7 (812) 386-76-36

Fax: +7 (812) 386-76-39

E-mail: forum.for.anthropology@gmail.com



Forum for Anthropology and Culture:

ISSN 1815-8927


Antropologicheskij forum:

ISSN 1815-8870


Mass Media Registration certificate PI No. FS77-35818, issued by the Federal Service for Supervision of Communications, Information Technology and Mass Media

EUROPEAN HUMANITIES RESEARCH CENTRE