Lecture
SAILS Lunch Seminar
- Date
- Monday 30 May 2022
- Time
- Address
- Online only
Peter Verhaar is Digital Scholarship Librarian and University Lecturer at the Leiden University Centre for the Arts in Society.
AI and Humanistic thinking
As is the case in virtually all academic disciplines, humanities scholars are increasingly trying to harness the manifold possibilities associated with AI. The emergence of tools and algorithms based on machine learning and deep learning has pushed researchers to experiment with data-rich approaches, which can expose properties of cultural and historical objects that could never be observed before, moving beyond the ‘human bandwidth’. The transition from mere data creation to actual analysis continues to pose challenges, however. In this presentation I want to discuss two central caveats for humanities scholars who aim to work with AI-based methods and to integrate the outcomes of these methods into their research.
A first important challenge stems from the lack of explainability of such results. Existing AI algorithms tend to focus first and foremost on developing models that classify specific objects, while the logic underlying those models often receives much less attention. The type of learning implemented in deep learning algorithms also differs quite fundamentally from the ways in which humanities scholars have traditionally produced knowledge. Fortunately, a number of techniques have been developed in recent years to clarify the steps that algorithms follow when generating predictions and classifications. Such techniques for enhancing the explainability of AI algorithms can ultimately help to reconcile AI-based methodologies with more conventional forms of humanistic thinking.
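(The abstract does not name specific explainability techniques; as a purely illustrative aside, the sketch below, which assumes scikit-learn and uses a made-up toy corpus, shows one of the simplest forms such inspection can take: reading the classification logic of a transparent linear model directly from its learned weights, in contrast with an opaque deep model.)

```python
# Illustrative sketch only, not from the talk: a transparent text classifier
# whose decision logic can be read off its weights (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus of labelled passages (invented for illustration, not real data).
texts = [
    "the sonnet employs iambic pentameter and a final couplet",
    "rhyme scheme and metre structure the stanza",
    "the treaty was signed after prolonged negotiations",
    "the parliament ratified the agreement in 1648",
]
labels = ["poetry", "poetry", "history", "history"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Each coefficient ties one vocabulary item to the predicted class, so the
# 'explanation' is simply the list of most heavily weighted terms.
terms = vectorizer.get_feature_names_out()
weights = model.coef_[0]
top = sorted(zip(terms, weights), key=lambda tw: abs(tw[1]), reverse=True)[:5]
for term, weight in top:
    print(f"{term:15s} {weight:+.3f}")
```

Post-hoc methods for deep models (feature-attribution or surrogate-model approaches, for example) pursue the same goal of making the steps behind a prediction inspectable, but for models whose internal logic cannot be read directly.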
A second challenge results from the fact that the datasets used as training data are often biased. Whereas humanities scholars typically aim to contextualise and explain events, objects and phenomena by considering them from many different perspectives, the ‘ground truth’ used to train models usually reflects only one perspective. Such biased datasets can have serious ramifications for marginalised communities and may reinscribe existing social and political inequalities.
Join us!
The SAILS Lunch Time Seminar is an online event, but it is not publicly accessible in real-time. If you would like to join this seminar, please send an email to sails@liacs.leidenuniv.nl to receive a link.