Maria Carrillo Castillo is the Head of Legal and GRC for Teneo.ai, a business that has developed technology and AI systems for some of the largest enterprise organizations in the world. In this article, Maria approaches the application of artificial intelligence from a legal, ethical and socially responsible perspective, highlighting the key issues for businesses to consider over the coming years.
Artificial Intelligence (AI) is behind many of the technologies we use in our daily routines, and more people than ever are aware of its presence in our lives. The field of law is no different.
Most corporate lawyers have inboxes full of requests to vet vendors offering AI solutions to support the likes of behavior prediction, operation optimization, resource allocation and personalized services, or even to design a nice logo for their go-to-market.
However, while AI offers us new opportunities, it also raises new risks and consequences to be aware of.
We walk a fine line, MSA readers: we can't act as roadblocks on the way to progress, yet we are also the main line of defense in protecting data and rights and in ensuring compliance with our businesses' ethical standards.
This guide will help you understand the considerations you need to make when adopting an AI technology. I hope you find it useful!
There are many definitions of AI; Britannica describes it as:
“The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”1

For the purposes of the proposal for a Regulation on Artificial Intelligence2, an Artificial Intelligence System is defined as “software that is developed with one or more of the techniques and approaches (…) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
Essentially, AI aims to perform tasks to a human-like level.
The tasks themselves haven't changed, but the tools that allow us to carry them out have. Our attention to the performance of a task should be the same as if a colleague were carrying it out. However, given the higher potential risk of an AI system getting something wrong, we should pay even greater attention to avoid serious problems.
Why is GDPR a key factor to consider when discussing the use of AI? The answer is uncomplicated: the processing of data is inherent to AI. AI consumes data to be trained, absorbs information to perform tasks and produces outcomes made up of data. And, since that data could be personal data, different high-risk processing activities must be considered: large-scale profiling, data matching, tracking and so on.
Further, if we consider the outsourcing of services, this can get out of hand and become impossible to track. When we assess a new vendor and its sub-processors, we are used to seeing different entities located all around the world, hosting data in countries other than where they are established, each performing a small part in an over-engineered system meant to make the tool work. There are too many risks to control.
How do we know if we are relying on a trustworthy AI-driven tool under GDPR?
The same way as with any other processor of our company's information: by performing a Data Protection Impact Assessment ('DPIA'), a Data Transfer Impact Assessment (when needed) and requesting a Data Processing Agreement that serves as the basis of your DPIA. This will not only help you understand the security behind the system but will also help you comply with your duty to inform the employees or clients who use the AI tool about the purposes of the processing and the grounds of legitimacy.
The good news is that the upcoming 'Artificial Intelligence Act3' aims to be an extension of GDPR in the field of AI, establishing a categorization of AI solutions based on their risk. This new rule would provide insightful guidelines for assessing the use of a tool and its compliance: the higher the risk associated with the tool, the more information its provider would have to supply.
The recommendation issued by the European Parliament on a civil liability regime for AI states that liability4:
“Ensures that a person who has suffered harm can claim and be compensated from the party proven to be liable for that harm or damage, and on the other hand, it provides incentives for natural and legal persons to avoid causing harm.”
For someone to be liable for a damaging act, there should be a causal link between the expected performance of an act and the actual harm. We could argue that there is no foreseeable issue here: we could follow national civil liability and fault-based tort law, or even the Product Liability Directive (a 30-year-old rule, but still up to date).
But when analyzing the situation, many questions arise: what can be considered damage? What is a defect? Can we really know what to expect from an AI tool? And who is the producer: the manufacturer, the developer, the programmer…?
The European legislator5 has a clear view: we should agree on definitions and narrow down the scope of liability in cases of harm produced by AI tools, but also harmonize the different liability laws to guarantee protection at every level.
The first question we need to resolve when considering a new AI system is: who (if anyone) controls the AI system?
If the vendor tells you that it cannot be fully controlled, then you should agree on a liability scheme before using it and ask for documentation on what to expect when using the AI system.
Then consider your next step: seek a vendor who can provide you with a controlled AI system, so you know where to turn in case of harm, while waiting for the European Union's new civil liability regime for AI, which will set different liability rules for different levels of risk.
Another key topic in the field of liability, and in what you can expect when using an AI tool, is Intellectual Property. As explained above, AI is trained on data in the first stage of implementation, but after it has been commercialized, the system may continue using data you provide, both to meet your requests and to keep learning (machine learning).
Many questions come to mind on this subject. Who owns the output once the system has been trained? Was the original training data protected by any IP rights? Did they ask for consent from all the owners of the data? Do they have to ask for consent? And in which cases?
The use of protected data is currently a major concern; we need only look at Getty6 suing Stability AI for copying photos without consent. When we assess an AI tool that relies on third-party data for its training and continuous learning, and we want to avoid being drawn into an IP claim, we first need to know the applicable jurisdiction. IP protection differs greatly among countries: compare the US fair use doctrine, European Directive 2004/48 and national laws. We may find common ground, but there are many differences.
Once we know the applicable jurisdiction, ask for evidence supporting the vendor's right to use the data, confirm that your use of the AI will be legitimate, and ensure the vendor will hold you harmless against third-party claims.
Again, we should have our backs covered before long, whether through the resolution of the judgments in the US and UK or through the development of new regulations in Europe. The European Parliament is eager to develop a framework for AI and IP that will consider the balance between the AI's level of autonomy and human intervention, the source and purpose of the protected data, authorship, the differences between derivative and original works, and the grounds on which authors would share their work to help AI develop, among many other closely related topics.
At Teneo.ai, we are eager to understand this environment and to stay on top of the concerns surrounding our products. A colleague of mine wrote that moral responsibility for an AI tool should be shared by every stakeholder behind its development, and that they should agree on safeguards to avoid misuse and mitigate any dangers to society.
There is a vast number of topics relating to AI and ethics, and it's almost impossible to cover them all comprehensively, so I'll take the opportunity to provide a short summary of the most pertinent ones and let you take it from there.
We want to feel secure when we use AI, to understand what is behind it and to know who is responsible for its performance. To help achieve this, a human-centric approach and transparency in AI-driven decision-making systems are key.
Disclosing the security behind the tool and the process followed, and stating its level of trustworthiness, should be an axiom in this field. And this is not just a lost item on a wish list: companies like Microsoft already do it7.
This subject is currently on the desks of the US Supreme Court8 and at roundtables in the European Union9, which luckily means we will shortly convert the theory into practice.
If there is no human oversight of an AI-driven tool, how can we rely on its safety and accountability, and, more importantly, how are we going to trust it? Every reliable system needs troubleshooting, oversight and control during its implementation and commercialization; AI should be no different. Just because we are anxious to see what AI can do on its own does not mean the result is going to be useful or trustworthy.
If we want to get the most out of AI, a human steer is vital.
We are all biased. It is natural and difficult to avoid, but we have the chance to filter the information we use when training an AI system. This information should represent society and be free of contamination, so that it guarantees an accurate reflection.
The assignment to avoid bias does not end there if we consider machine learning: the system will keep learning, so data will still need to be harvested carefully to maintain a non-discriminating system.
You may remember the movie 'I, Robot', where a robot turns against its manufacturer because it can no longer be controlled. Something similar happened with Meta's BlenderBot 310, which said the owner of the company that created it is "too creepy and manipulative."
Again, when a company intends to provide a trustworthy service, you see initiatives like the one from Google11, which introduced a 10-shade skin tone scale to address AI bias.
The moment we use this kind of uncontrolled AI tool, with its unreliable outcomes, we can't help but feel that these are not fair tools, which leads us to the next ethics-related topic:
AI-driven tools can help us, among many other things, to compile the information we need to understand a topic in milliseconds, to route us to the correct agent when we call our telecom provider, and even to suggest the best doctor in the area.
Once we have the information, we decide, and that decision is based on the information provided by the tool. If the tool is biased, the information will be too, and our decision will have been made on unfair, biased grounds.
Unfair bias could have undesirable implications for us individually and for society: marginalization and difficulty in accessing services. The development of unfair and irresponsible AI would lead to a loss of trust in AI in general, or worse, when it should be a support rather than a burden.
These are just a few of the considerations to make as your company thinks about using an AI tool. As the technology progresses, however, a whole new set of questions will likely need to be asked.
We are living through an incredible expansion of technology, but a sharp degree of conscientiousness is vital for mitigating risks. AI may transform the way we live and, when used properly, may help us resolve some of the greatest threats to the world, which is why all of society should contribute to and participate in the development of these technologies as well as the laws that govern their progression.