
Hybrid Intelligence: Making the unknown visible for Humans and AI

A consortium made up of Leiden University (Institute of Public Administration/Digitalisation & Public Policy: Bram Klievink, Sarah Giest, Bart Schermer), VU (Professor Fabio Massacci), TU Delft, TNO, and Thales has been awarded an NWO grant of 1.5 million euros. The research project investigates the ‘metadata of uncertainty’ in machine-readable and human-interpretable forms, with the aim of finding ways to apply Artificial Intelligence responsibly to create a safer society.

Professor of Digitalisation & Public Policy, Bram Klievink, explains: ‘In all kinds of analytical processes within government, take threat intelligence for instance, humans and artificial intelligence often collaborate in a "hybrid" process to obtain and process actionable intelligence. This comes with considerable uncertainties and biases. Think of models that aren’t perfect, of data that cannot be shared, or only partially, for operational or strategic reasons, or of intentionally or unintentionally misleading sources.’

Experts such as analysts are aware of these uncertainties and biases, but lack formal means to handle them and the implications they have for their work. Klievink: ‘These uncertainties and biases are practically unavoidable, especially in situations in which data and insights travel across the boundaries of departments, organisations, and domains.’

With each link you lose information regarding choices

The project the consortium is taking on will develop this ‘metadata of uncertainty’ in machine-readable and human-interpretable forms and validate it empirically. Klievink: ‘With each of these steps, some machine processing is involved, as well as some human processing and expertise; that’s the hybrid character. And with each link, information about choices, limitations, and uncertainties is lost. In this project, we (The Hague Centre for Digital Governance) will collaborate with computer scientists on this "metadata of uncertainty", which will enable explicit reasoning about the trade-off between accuracy, proportionality, privacy, and cost-effectiveness in intelligence work.’
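To make the idea concrete, the sketch below shows what such machine-readable uncertainty metadata could look like in code. It is a minimal, hypothetical illustration in Python: the field names (source, model_confidence, data_completeness, known_biases, sharing_restrictions) and the simple aggregation rule are assumptions made for the purpose of the example, not the design the project will actually develop.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UncertaintyMetadata:
    """Hypothetical machine-readable record of the uncertainty added at one step in the chain."""
    source: str                                  # which organisation, analyst, or model produced this step
    model_confidence: Optional[float] = None     # e.g. classifier confidence, if a model was involved
    data_completeness: float = 1.0               # fraction of the underlying data that could actually be shared
    known_biases: List[str] = field(default_factory=list)          # flags raised by analysts or model owners
    sharing_restrictions: List[str] = field(default_factory=list)  # operational or strategic limits on sharing

@dataclass
class IntelligenceItem:
    """A piece of intelligence plus the uncertainty metadata accumulated along the hybrid chain."""
    content: str
    provenance: List[UncertaintyMetadata] = field(default_factory=list)

    def add_step(self, step: UncertaintyMetadata) -> None:
        # Each human or machine link appends its own record instead of silently discarding it.
        self.provenance.append(step)

    def overall_completeness(self) -> float:
        # Illustrative aggregate only: completeness compounds across links.
        result = 1.0
        for step in self.provenance:
            result *= step.data_completeness
        return result

# Example: two links in a hybrid chain annotate the same item.
item = IntelligenceItem(content="possible threat indicator")
item.add_step(UncertaintyMetadata(
    source="partner organisation",
    data_completeness=0.6,
    sharing_restrictions=["source protection: only partial data shared"],
))
item.add_step(UncertaintyMetadata(
    source="classification model",
    model_confidence=0.82,
    known_biases=["training data skewed towards older cases"],
))
print(item.overall_completeness())  # 0.6: downstream analysts can see how much was lost

The point of the sketch is that every link in the chain, human or machine, appends its own record, so downstream analysts can see which choices, limitations, and uncertainties travelled with the item rather than losing that information along the way.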

What do these innovations mean for daily practice?

Klievink: ‘Within this multidisciplinary field, we, as social and behavioural scientists, are interested in the effects these innovations will have on professionals working in the field. What impact will this have on intelligence professionals and their processes? And what demands will it place on management, and what are effective organisational and architectural choices with regard to data science and AI, also in relation to all kinds of primary processes? How to organise these learning processes and the required feedback mechanisms is a question that interests us at the Centre, and it becomes more complex when you are trying to learn within such a hybrid chain. Across the boundaries between humans and machines and between organisations, the feedback chain is often much longer.’

According to Klievink, the combination of these two aspects, formalising the metadata that should continue to travel up hybrid intelligence chains, and understanding and supporting that formalisation within the process, contributes to the responsible application of artificial intelligence for a better society.
