The European Commission has proposed new rules to help people harmed by products using artificial intelligence (AI) and digital devices like drones.
The AI Liability Directive would reduce the burden of proof on people suing over incidents involving such items.
Justice Commissioner Didier Reynders said it would make a legal framework that was fit for the digital age.
Self-driving cars, voice assistants and search engines could all fall under the directive’s scope.
If passed, the Commission’s rules could run alongside the EU’s proposed Artificial Intelligence Act – the first law of its kind to set limits on how and when AI systems can be used.
Artificial intelligence systems are trained on large amounts of data to allow machines to perform tasks that would typically require human intelligence.
This means victims will not have to untangle complicated AI systems to prove their case, so long as a causal link between a product’s AI performance and the associated harm can be shown.
For a long time, social media firms have hidden behind the caveat that they are merely platforms for other people’s content and therefore not responsible for it.
The EU does not want to repeat this scenario, with companies which make drones, for example, escaping liability for harm just because the firm itself wasn’t directly at the controls.
If your product is set up to be able to cause distress or damage, then you need to take responsibility if it does, is the clear message – and perhaps one which is overdue.
Is this unduly harsh on a comparatively new industry? If a car crashes because of the mechanics inside the vehicle, that’s down to the manufacturer; the behaviour of the driver is not.
Should this draft go through, all eyes will be on the first test case. Europe continues to chase the tail of big tech with big regulation – but is it being realistic here?
According to the European Commission, high-risk use of AI can include infrastructure or products which could directly affect someone’s life and livelihood, such as transport, exam-scoring and border control.
Information disclosure about such products will let victims gain more insight into liability, but will be subject to safeguards to “protect sensitive information”.
While such provisions in the directive could make businesses “unhappy”, Sarah Cameron, technology legal director at law firm Pinsent Masons, said the rules helped clarify liability for AI-enabled products for consumers and businesses alike.
“A major barrier to businesses adopting AI has been the complexity, autonomy and opacity (the so-called black box effect) of AI, creating uncertainty around establishing liability and with whom it sits,” she said.
“The proposal will ensure that when AI systems are defective and cause physical damage or data loss, it’s possible to seek compensation from the AI-system provider or from any manufacturer that integrates an AI system into another product.”