
Stephan Raaijmakers: ‘Everyone within Humanities can contribute to the study of AI’
Stephan Raaijmakers has been Professor of Communicative AI since 1 May. Prior to this, he had held this position for five years as professor by special appointment. How has his approach to AI changed in that time?
‘Communicative AI may sound a bit vague,’ says Raaijmakers, ‘but at its core the field deals with two aspects of AI. The first looks at questions about how AI language models learn language and how humans can communicate as optimally as possible with such models. There are a variety of scientific issues at play here. Can you think of language models as linguistic theories or only as technological artifacts? How do you evaluate their performance in a way that is scientifically sound? What role can language models play in linguistic research? And: what new communication skills do the users of language models need to develop?’
The second aspect deals with the technical side of AI and focuses on the underlying algorithms and how to understand and influence them. When and how should you limit them, and how can you detect that they are making up information, exhibiting toxic behaviour or hallucinating?
Changes
When Raaijmakers started his career, the average person knew hardly anything about artificial intelligence. That has changed now. 'You can't really avoid AI anymore,' he says. 'ChatGPT was like the proverbial stone that sent ripples across the pond. The emergence of that AI assistant pulled us into a slipstream in which Big Tech started developing a series of major language models.'
The latest development is that AI is now starting to seep into our everyday products, such as mail, Office and all kinds of mobile applications. Raaijmakers: 'We are also seeing a huge boom in language models in the public domain. These are versions of language models that have been fine-tuned or otherwise modified by third parties. The vast majority of these models trace back to "public domain" versions of commercial models distributed by Big Tech. For science, this in itself is not unfavourable: at least now we can tinker with and study the models ourselves.'
Dependence
Models like ChatGPT, Meta's Llama and Google's Gemini are presented as tools that make information more accessible, but according to Raaijmakers, we have become too dependent on them. 'The quest for sovereignty and for reducing our dependence on Big Tech in AI is an important development within the geopolitical field. Initiatives like GPT-NL, a national project to create a Dutch GPT model, are a step in the right direction.'
Energy
Besides doing research, Raaijmakers is also looking forward to teaching. 'I enjoy being in front of students. AI is a field that you can approach from many different angles. Students all have different backgrounds, so they bring something new to the table. One of my goals is for Leiden to become a place where we can study the different aspects of AI, be it the linguistic or the technological side, in such a way that everyone can contribute, regardless of whether they come from the sciences or the humanities. It's important to understand that AI is not a purely technical affair. In fact, with the advent of language models that can code very well, in-depth technical knowledge is becoming somewhat less urgent. Knowledge of effective human-machine communication and critical thinking, on the other hand, are all the more relevant. Leiden Humanities has a long tradition as a home of world-class linguistic knowledge, so there's an excellent opportunity here for us. We are working hard to strengthen the link between language and AI, including through our curricula and cooperation with the various institutes in the humanities.'