Will AI be listening in on your future job interview? On law, technology and privacy

The law and Artificial Intelligence (AI) applications need to be better aligned to ensure our personal data and privacy are protected. PhD candidate Andreas Häuselmann sees opportunities in AI, but also dangers if this alignment does not happen.

Imagine you apply for a job and are rejected because you do not want it enough. Later you discover that an AI application that can read emotions detected a lack of enthusiasm in your voice. Or you are unable to get a mortgage because an AI system gives you a low credit score based on when and how often you charge your phone.

Protecting personal data

These are examples of a future that Häuselmann envisions if the law does not respond better to the rapid developments within AI. ChatGPT, personal recommendations on Netflix and virtual assistants like Siri or Alexa: it is already hard to imagine a world without AI. But how do we ensure that personal data, including data about our health, thoughts and emotions, is effectively protected?

‘To put it simply: we have to ensure that legislation is more responsive to developments in AI. Take the "accuracy principle", which is enshrined in our European legislation. This means that personal data has to be accurate and up to date. If a company misspells your name, it violates that principle and has to change your name when you enforce your right to rectification,’ says Häuselmann.

‘But what if AI makes predictions about your life: what career would suit you? How long will you live? Will you stay healthy and how much money will you earn in the future? Then it is impossible for individuals to prove that such personal data is inaccurate when invoking their right to rectification, because predictions relate to the future. I suggest we reverse the burden of proof here: not you, but the organisation that used your data, has to prove that the information it generated is correct.’

AI companies want clarity

At the same time, says Häuselmann, the EU legislature should also look at another principle: fairness. This means ensuring that the use of personal data does not have adverse, discriminatory or unexpected effects, for example on consumers. The principle is currently very vague, and tech companies working with AI would benefit greatly from more clarity. More importantly, a better elaborated fairness principle would protect individuals more effectively against the risks of AI.

‘The law should do more here to speak the language of AI, so companies know how to respond.’ Häuselmann, who works at the international law firm De Brauw, sees that tech companies too are looking to make their AI applications legally future-proof. ‘We need to move toward legislation that is clear yet flexible enough to respond to the rapid developments within AI.’

Neuralink

Although the development of AI poses risks to our personal data, the law should not block it, says Häuselmann. Technology can be of great value in healthcare, for example. ‘Take Neuralink’s brain implant, which can allow people with paralysis to control a computer. Technology is neither good nor bad in itself. The law should look at its use and the intentions behind it.’

Two worlds

Looking back at his research, Häuselmann is particularly proud of how he managed to learn the languages of two largely separate worlds. ‘I’m a lawyer but I also gave a tutorial at MIT [a tech university in the US, Ed.]. I had expected the tech experts there to be sceptical about my critical view of AI but the opposite proved true. These two worlds should continue to seek each other out for future research.’

Andreas Häuselmann will defend his PhD on 23 April.

Text: Tim Senden
Image: Unsplash.com/bruce mars
